PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 10 failed / 2858 succeeded
Started: 2019-09-20 02:38
Elapsed: 27m29s
Builder: gke-prow-ssd-pool-1a225945-n47b
Refs: master:db1f8da0, 82703:9bebce9e
pod: a03796be-db4f-11e9-85fa-522193c84e76
infra-commit: 79a4a73da
repo: k8s.io/kubernetes
repo-commit: 72e82d40f964ffc27618909288d63b63d5fb15be
repos: {u'k8s.io/kubernetes': u'master:db1f8da036428636a710a9081a5fc18ba30c6ef0,82703:9bebce9edc4244cba9dfbd96d73b8138809173e5'}
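
The refs above pin the exact tree under test: master at db1f8da0 with the head of PR #82703 (9bebce9e) merged on top. A minimal sketch for recreating that tree locally, assuming the standard GitHub remote for k8s.io/kubernetes:

git clone https://github.com/kubernetes/kubernetes && cd kubernetes
git checkout db1f8da036428636a710a9081a5fc18ba30c6ef0   # master base
git fetch origin pull/82703/head                        # PR under test
git merge 9bebce9edc4244cba9dfbd96d73b8138809173e5      # PR head commit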

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s
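
The scheduler integration tests need a local etcd on PATH. A sketch of a local reproduction, assuming the repo's usual tooling (hack/install-etcd.sh and the test-integration make target):

hack/install-etcd.sh                            # installs etcd under third_party/etcd
export PATH="$(pwd)/third_party/etcd:${PATH}"
make test-integration WHAT=./test/integration/scheduler \
    KUBE_TEST_ARGS="-run TestNodePIDPressure$"

The job itself invoked the test directly: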

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0920 02:59:36.628810  108596 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 02:59:36.628834  108596 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 02:59:36.628848  108596 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 02:59:36.628858  108596 master.go:259] Using reconciler: 
I0920 02:59:36.630833  108596 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.631137  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.631164  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.632034  108596 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 02:59:36.632091  108596 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 02:59:36.632135  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.632472  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.632550  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.633237  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.633577  108596 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 02:59:36.633613  108596 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.633679  108596 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 02:59:36.633899  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.633924  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.634415  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.634710  108596 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 02:59:36.634746  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.634827  108596 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 02:59:36.634882  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.634902  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.636185  108596 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 02:59:36.636373  108596 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 02:59:36.636259  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.636443  108596 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.636562  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.636579  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.637670  108596 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 02:59:36.637792  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.637823  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.637946  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.637966  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.638030  108596 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 02:59:36.639415  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.640419  108596 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 02:59:36.640687  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.640943  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.641000  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.641123  108596 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 02:59:36.641832  108596 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 02:59:36.641914  108596 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 02:59:36.641982  108596 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.642110  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.642128  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.642448  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.642609  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.642882  108596 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 02:59:36.642967  108596 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 02:59:36.643022  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.643156  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.643175  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.645657  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.647033  108596 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 02:59:36.647067  108596 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 02:59:36.647902  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.648765  108596 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.649155  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.649278  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.650530  108596 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 02:59:36.650567  108596 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 02:59:36.651798  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.652658  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.652895  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.653061  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.654135  108596 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 02:59:36.654284  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.654452  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.654472  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.654546  108596 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 02:59:36.655884  108596 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 02:59:36.655927  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.655933  108596 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 02:59:36.656035  108596 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.656193  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.656216  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.657260  108596 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 02:59:36.657420  108596 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 02:59:36.657412  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.657545  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.657561  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.658291  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.658945  108596 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 02:59:36.658981  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.659021  108596 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 02:59:36.659134  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.659152  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.660040  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.660069  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.660554  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.660986  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.661152  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.661176  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.662736  108596 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 02:59:36.662750  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.662759  108596 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 02:59:36.662918  108596 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 02:59:36.663567  108596 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.663828  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.664277  108596 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.664999  108596 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.665707  108596 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.666264  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.667103  108596 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.667769  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.668063  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.668357  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.669086  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.669734  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.669936  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.670961  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.671360  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.671867  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.672145  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.672679  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.672875  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.673025  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.673204  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.673403  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.673546  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.673787  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.674501  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.674757  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.675430  108596 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.676022  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.676293  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.676682  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.677301  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.677586  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.678217  108596 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.678896  108596 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.679523  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.680165  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.680508  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.680829  108596 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 02:59:36.680855  108596 master.go:461] Enabling API group "authentication.k8s.io".
I0920 02:59:36.680872  108596 master.go:461] Enabling API group "authorization.k8s.io".
I0920 02:59:36.681131  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.681309  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.681362  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.682406  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 02:59:36.682524  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 02:59:36.682573  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.682709  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.682730  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.683893  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 02:59:36.684020  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 02:59:36.684051  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.684200  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.684222  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.684782  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.686211  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 02:59:36.686372  108596 master.go:461] Enabling API group "autoscaling".
I0920 02:59:36.686615  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.686374  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 02:59:36.686388  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.686882  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.687419  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.688018  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.688830  108596 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 02:59:36.688880  108596 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 02:59:36.688985  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.689163  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.689183  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.689853  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.690590  108596 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 02:59:36.690610  108596 master.go:461] Enabling API group "batch".
I0920 02:59:36.690744  108596 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 02:59:36.690747  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.690902  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.690920  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.691607  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.692332  108596 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 02:59:36.692365  108596 master.go:461] Enabling API group "certificates.k8s.io".
I0920 02:59:36.692508  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.692636  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.692662  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.692740  108596 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 02:59:36.694609  108596 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 02:59:36.694687  108596 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 02:59:36.694745  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.726572  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.726571  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.727686  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.727730  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.728636  108596 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 02:59:36.728666  108596 master.go:461] Enabling API group "coordination.k8s.io".
I0920 02:59:36.728685  108596 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 02:59:36.728899  108596 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 02:59:36.728880  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.729139  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.729169  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.730255  108596 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 02:59:36.730289  108596 master.go:461] Enabling API group "extensions".
I0920 02:59:36.730358  108596 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 02:59:36.730495  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.730509  108596 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.730664  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.730684  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.731943  108596 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 02:59:36.732103  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.732216  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.732230  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.732298  108596 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 02:59:36.732855  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.733236  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.733633  108596 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 02:59:36.733665  108596 master.go:461] Enabling API group "networking.k8s.io".
I0920 02:59:36.733700  108596 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 02:59:36.733705  108596 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.733869  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.733909  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.735551  108596 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 02:59:36.735575  108596 master.go:461] Enabling API group "node.k8s.io".
I0920 02:59:36.735633  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.735724  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.735867  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.735882  108596 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 02:59:36.735887  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.736728  108596 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 02:59:36.736888  108596 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.737031  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.737047  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.737130  108596 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 02:59:36.737620  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.739073  108596 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 02:59:36.739104  108596 master.go:461] Enabling API group "policy".
I0920 02:59:36.739138  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.739249  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.739268  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.739291  108596 watch_cache.go:405] Replace watchCache (rev: 30360) 
I0920 02:59:36.739374  108596 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 02:59:36.741588  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.742979  108596 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 02:59:36.743185  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.743442  108596 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 02:59:36.743449  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.743557  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.745085  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.745453  108596 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 02:59:36.745481  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.745504  108596 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 02:59:36.745607  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.745622  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.747204  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.747243  108596 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 02:59:36.747421  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.747457  108596 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 02:59:36.747561  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.747579  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.748638  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.748652  108596 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 02:59:36.748705  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.748780  108596 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 02:59:36.748808  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.748825  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.750005  108596 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 02:59:36.750161  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.750245  108596 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 02:59:36.750282  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.750298  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.751602  108596 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 02:59:36.751634  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.751654  108596 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 02:59:36.751752  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.751767  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.752727  108596 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 02:59:36.752880  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.752902  108596 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 02:59:36.753001  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.753018  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.754872  108596 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 02:59:36.754909  108596 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 02:59:36.755062  108596 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 02:59:36.757279  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.757359  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.757283  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.757676  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.757689  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
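
Each "Listing and watching *T from storage/cacher.go:..." line is a reflector starting its list half; the matching "Replace watchCache (rev: N)" line is the cacher rebuilding its cache wholesale from that list at revision N, after which it watches from the same revision so no update is missed. A runnable toy version of that list-then-watch loop, with hypothetical in-memory types standing in for the etcd-backed store:

package main

import "fmt"

type event struct {
	rev int64
	obj string
}

// memBackend is a toy stand-in for the etcd-backed store.
type memBackend struct{}

// List returns a snapshot and the revision it was taken at.
func (memBackend) List() ([]string, int64) { return []string{"pod-a"}, 30360 }

// Watch streams events that happened after rev.
func (memBackend) Watch(rev int64) <-chan event {
	ch := make(chan event, 1)
	ch <- event{rev: rev + 1, obj: "pod-b"}
	close(ch)
	return ch
}

type watchCache struct {
	rev   int64
	items []string
}

// replace mirrors "Replace watchCache (rev: N)": rebuild from a fresh list.
func (c *watchCache) replace(items []string, rev int64) {
	c.items, c.rev = items, rev
	fmt.Printf("Replace watchCache (rev: %d)\n", rev)
}

func main() {
	var c watchCache
	b := memBackend{}
	items, rev := b.List() // the "Listing" half
	c.replace(items, rev)
	for ev := range b.Watch(rev) { // the "watching" half, resuming at rev
		c.rev = ev.rev
		c.items = append(c.items, ev.obj)
	}
	fmt.Println(c.rev, c.items) // 30361 [pod-a pod-b]
}
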
I0920 02:59:36.759940  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.760124  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.760277  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.761565  108596 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 02:59:36.761720  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.761784  108596 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 02:59:36.761846  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.761866  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.763140  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.763942  108596 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 02:59:36.763980  108596 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 02:59:36.764049  108596 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 02:59:36.764120  108596 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 02:59:36.764295  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.764503  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.764526  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.765103  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.765455  108596 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 02:59:36.765595  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.765789  108596 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 02:59:36.765812  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.765974  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.767138  108596 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 02:59:36.767195  108596 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.767326  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.767351  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
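
The repeated client.go / endpoint.go pairs record the etcd client dialing through a custom gRPC name resolver: a scheme named "endpoint" is parsed from the target, and the resolver then pushes the configured endpoint list ({http://127.0.0.1:2379 ...}) to the ClientConn, which is the "sending new addresses to cc" message. A hedged sketch of the same mechanism with grpc-go's public resolver API; the names staticBuilder/staticResolver are illustrative, not etcd's:

package main

import (
	"fmt"

	"google.golang.org/grpc/resolver"
)

type staticBuilder struct{ addrs []string }

func (b *staticBuilder) Scheme() string { return "static" }

// Build is called when a ClientConn dials "static:///..."; it reports the
// fixed address list to the conn, analogous to ccResolverWrapper's
// "sending new addresses to cc" in the log above.
func (b *staticBuilder) Build(_ resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (resolver.Resolver, error) {
	var state resolver.State
	for _, a := range b.addrs {
		state.Addresses = append(state.Addresses, resolver.Address{Addr: a})
	}
	cc.UpdateState(state) // push the endpoint list to the ClientConn
	return &staticResolver{}, nil
}

type staticResolver struct{}

func (*staticResolver) ResolveNow(resolver.ResolveNowOptions) {}
func (*staticResolver) Close()                                {}

func main() {
	resolver.Register(&staticBuilder{addrs: []string{"http://127.0.0.1:2379"}})
	fmt.Println(`parsed scheme: "static"`)
}
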
I0920 02:59:36.767426  108596 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 02:59:36.767751  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.768569  108596 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 02:59:36.768723  108596 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.768991  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.769115  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.769299  108596 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 02:59:36.769506  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.770357  108596 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 02:59:36.770504  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.770633  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.770651  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.770733  108596 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 02:59:36.771182  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.772398  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.772963  108596 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 02:59:36.773011  108596 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 02:59:36.773096  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.773231  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.773250  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.774695  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.774722  108596 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 02:59:36.774743  108596 master.go:461] Enabling API group "storage.k8s.io".
I0920 02:59:36.774882  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
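
One readability note on the Config dumps: CompactionInterval:300000000000 and CountMetricPollPeriod:60000000000 are Go time.Duration fields printed as raw nanosecond counts, i.e. 5 minutes and 1 minute respectively. A one-liner check:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Duration(300000000000)) // CompactionInterval    -> 5m0s
	fmt.Println(time.Duration(60000000000))  // CountMetricPollPeriod -> 1m0s
}
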
I0920 02:59:36.774934  108596 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 02:59:36.775001  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.775020  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.776112  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.777341  108596 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 02:59:36.777480  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.777605  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.777622  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.777696  108596 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 02:59:36.778259  108596 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 02:59:36.778440  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.778554  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.778570  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.778652  108596 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 02:59:36.779723  108596 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 02:59:36.779881  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.780037  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.780068  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.780162  108596 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 02:59:36.780794  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.782184  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.782587  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.782962  108596 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 02:59:36.783108  108596 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.783224  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.783247  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.783342  108596 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 02:59:36.784550  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.785110  108596 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 02:59:36.785139  108596 master.go:461] Enabling API group "apps".
I0920 02:59:36.785173  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.785282  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.785307  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.785304  108596 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 02:59:36.786439  108596 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 02:59:36.786477  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.786569  108596 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 02:59:36.786577  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.786596  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.787777  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.788823  108596 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 02:59:36.788864  108596 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 02:59:36.788865  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.788997  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.789034  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.789777  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.789942  108596 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 02:59:36.789974  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.790015  108596 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 02:59:36.790101  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.790117  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.791269  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.791529  108596 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 02:59:36.791548  108596 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 02:59:36.791580  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.791600  108596 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 02:59:36.791872  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:36.791890  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:36.792187  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.793024  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.794382  108596 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 02:59:36.794497  108596 master.go:461] Enabling API group "events.k8s.io".
I0920 02:59:36.794450  108596 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 02:59:36.794753  108596 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.794894  108596 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.795217  108596 watch_cache.go:405] Replace watchCache (rev: 30361) 
I0920 02:59:36.795210  108596 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.795444  108596 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.795554  108596 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.795658  108596 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.795855  108596 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.795949  108596 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.796028  108596 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.796117  108596 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.797415  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.797691  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.798738  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.799076  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.799959  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.800291  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.801186  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.801566  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.802374  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.802693  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 02:59:36.802803  108596 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0920 02:59:36.803445  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.803652  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.803957  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.804866  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.806485  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.807349  108596 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.807797  108596 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.808683  108596 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.809561  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.810036  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.810809  108596 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 02:59:36.810977  108596 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 02:59:36.811845  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.812218  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.812879  108596 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.813647  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.814229  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.815007  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.815870  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.816583  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.817185  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.817939  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.818654  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 02:59:36.818806  108596 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 02:59:36.819484  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.820153  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 02:59:36.820344  108596 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 02:59:36.821169  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.821879  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.822224  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.822877  108596 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.823447  108596 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.824038  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.824738  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 02:59:36.824811  108596 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 02:59:36.826145  108596 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.827135  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.827764  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.828676  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.829042  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.829411  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.830269  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.830637  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.831066  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.831967  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.832307  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.832688  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 02:59:36.832834  108596 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 02:59:36.832901  108596 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 02:59:36.833662  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.834352  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.835101  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.835821  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 02:59:36.836745  108596 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
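Each storage_factory.go line above wires one REST resource to the shared etcd backend: the object is stored at its preferred external version (apps/v1, storage.k8s.io/v1beta1, and so on) and decoded through the group's __internal version, all under the per-test etcd prefix. As a rough illustration, the storagebackend.Config echoed in every one of those lines can be constructed as in the following minimal sketch (values copied from the log; the field set matches k8s.io/apiserver at this commit and may differ in newer releases; this is not the test's own setup code):

package main

import (
    "fmt"

    "k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
    // Mirrors the storagebackend.Config printed in each "storing ..." line above.
    cfg := storagebackend.Config{
        Prefix: "d1ae1f25-dcf3-46a6-a040-e3a4a2a5c4bb", // per-test etcd key prefix
        Transport: storagebackend.TransportConfig{
            ServerList: []string{"http://127.0.0.1:2379"}, // the test's local etcd
        },
        Paging: true, // list pagination enabled, as logged
    }
    fmt.Printf("%#v\n", cfg)
}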
I0920 02:59:36.840662  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 02:59:36.840697  108596 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 02:59:36.840708  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:36.840719  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 02:59:36.840727  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 02:59:36.840735  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 02:59:36.840768  108596 httplog.go:90] GET /healthz: (256.351µs) 0 [Go-http-client/1.1 127.0.0.1:48330]
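The block above is the verbose /healthz body: the server runs one named check per subsystem plus one per registered poststarthook, prints [+] for passing and [-] for failing checks, and keeps answering with an error status until all of them pass. A minimal sketch of polling the endpoint the way the harness does (the address is an assumption; the integration apiserver listens on an ephemeral local port):

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // NOTE: 127.0.0.1:8080 is a placeholder for the test server's ephemeral port.
    // "?verbose" makes /healthz print the per-check lines even once it succeeds.
    resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
}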
I0920 02:59:36.842869  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.626322ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.846862  108596 httplog.go:90] GET /api/v1/services: (1.194788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.850673  108596 httplog.go:90] GET /api/v1/services: (1.067374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.852542  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 02:59:36.852578  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:36.852591  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 02:59:36.852626  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 02:59:36.852634  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 02:59:36.852657  108596 httplog.go:90] GET /healthz: (212.166µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.854436  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.688585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48332]
I0920 02:59:36.856524  108596 httplog.go:90] POST /api/v1/namespaces: (1.801248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48332]
I0920 02:59:36.856722  108596 httplog.go:90] GET /api/v1/services: (1.501456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48334]
I0920 02:59:36.856928  108596 httplog.go:90] GET /api/v1/services: (2.440969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.858832  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.402366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48332]
I0920 02:59:36.860768  108596 httplog.go:90] POST /api/v1/namespaces: (1.514471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.862191  108596 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (806.135µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:36.864132  108596 httplog.go:90] POST /api/v1/namespaces: (1.251518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
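The GET 404 / POST 201 pairs above are the bootstrap controller making sure the system namespaces kube-system, kube-public, and kube-node-lease exist. A minimal get-or-create sketch of the same pattern, written against a current client-go rather than the apiserver's internal controller (the kubeconfig path is an assumption):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// ensureNamespace mirrors the logged pattern: GET, and only on a 404, POST.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
    _, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
    if err == nil {
        return nil // already exists
    }
    if !apierrors.IsNotFound(err) {
        return err
    }
    ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
    _, err = cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
    return err
}

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)
    for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
        if err := ensureNamespace(context.Background(), cs, name); err != nil {
            panic(err)
        }
    }
}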
[... the same verbose healthz poll, with etcd plus the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and ca-registration poststarthooks still failing, repeated at roughly 100 ms intervals through 02:59:37.553; fourteen near-identical entries elided ...]
I0920 02:59:37.628607  108596 client.go:361] parsed scheme: "endpoint"
I0920 02:59:37.628707  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 02:59:37.642594  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:37.642632  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 02:59:37.642643  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 02:59:37.642651  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 02:59:37.642688  108596 httplog.go:90] GET /healthz: (1.29478ms) 0 [Go-http-client/1.1 127.0.0.1:48330]
[... three more polls with the same result (etcd now ok, the three bootstrap poststarthooks still pending), 02:59:37.654 through 02:59:37.754, elided ...]
I0920 02:59:37.843365  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:37.843400  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 02:59:37.843411  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 02:59:37.843419  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 02:59:37.843456  108596 httplog.go:90] GET /healthz: (1.727533ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:37.843550  108596 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.551029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:37.843827  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.088467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48334]
I0920 02:59:37.843850  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.669936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48330]
I0920 02:59:37.845206  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (974.675µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.845684  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.250502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48334]
I0920 02:59:37.845729  108596 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.737184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:37.846497  108596 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 02:59:37.847970  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.843637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.848209  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.493313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:37.848524  108596 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.821389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48334]
I0920 02:59:37.850202  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.440639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.850416  108596 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.538709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:37.850647  108596 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 02:59:37.850662  108596 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
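As the storage_scheduling.go lines record, the scheduling/bootstrap-system-priority-classes hook seeds the two built-in PriorityClasses, system-node-critical (2000001000) and system-cluster-critical (2000000000). They can be read back with client-go; a minimal sketch assuming a current client-go and a reachable kubeconfig:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)
    // List the PriorityClasses the bootstrap hook just ensured.
    pcs, err := cs.SchedulingV1().PriorityClasses().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, pc := range pcs.Items {
        fmt.Printf("%s\t%d\n", pc.Name, pc.Value)
    }
}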
I0920 02:59:37.851339  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (770.421µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.852456  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (800.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.853659  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (753.223µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.855077  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.11968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:37.855306  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:37.855833  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:37.856056  108596 httplog.go:90] GET /healthz: (2.155034ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.857337  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (800.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.858526  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (911.687µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.860671  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.753071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.860857  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 02:59:37.861833  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (860.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.864006  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.590514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.864152  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0920 02:59:37.865300  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.034307ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.867293  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.53367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.867657  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 02:59:37.868789  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (867.364µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.870804  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.411433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.870947  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 02:59:37.872394  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.314121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.874598  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.79831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.874787  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 02:59:37.875799  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (839.817µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.877670  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.877952  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 02:59:37.879057  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (949.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.880882  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.881152  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 02:59:37.882419  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (990.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.884477  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.541947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.884690  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 02:59:37.885878  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (989.472µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.888912  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.981983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.889265  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 02:59:37.890560  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.048539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.893660  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.725067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.894074  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 02:59:37.895175  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (724.25µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.897061  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.502041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.897505  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 02:59:37.898772  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (897.854µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.902011  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.737432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.902597  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 02:59:37.903568  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (803.877µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.907011  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.447787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.907208  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 02:59:37.908661  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (933.837µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.911025  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.815848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.911410  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 02:59:37.912691  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (914.06µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.914955  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.749769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.915291  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 02:59:37.916368  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (753.155µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.918167  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.189982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.918372  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 02:59:37.919288  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (640.923µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.921014  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.152188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.921346  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 02:59:37.922449  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (795.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.924503  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.543948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.924765  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 02:59:37.925578  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (661.732µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.926930  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.028208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.927211  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 02:59:37.928111  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (720.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.929766  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.930082  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 02:59:37.931154  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (761.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.932646  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.11353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.932811  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 02:59:37.934704  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (897.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.936553  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.936781  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 02:59:37.937696  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (751.426µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.939482  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.256766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.939692  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 02:59:37.940672  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (802.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.942949  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:37.942970  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:37.942992  108596 httplog.go:90] GET /healthz: (1.800819ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:37.943758  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.80128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.943913  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 02:59:37.944866  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (766.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.946435  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.23506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.946612  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 02:59:37.947579  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (814.608µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.950028  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.950291  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 02:59:37.951287  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (795.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.953962  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:37.954550  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:37.954819  108596 httplog.go:90] GET /healthz: (1.774482ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:37.954479  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.722956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.955394  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 02:59:37.956568  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (860.636µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.958252  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.201268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.958528  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 02:59:37.959505  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (810.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.961290  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.961492  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 02:59:37.962466  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (765.116µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.964642  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.791529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.965113  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 02:59:37.966168  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (726.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.967959  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.303759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.968299  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 02:59:37.969368  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (740.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.971432  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.554363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.971723  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 02:59:37.972798  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (728.855µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.975053  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.640886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.975350  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 02:59:37.976274  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (753.793µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.978254  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.978551  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 02:59:37.979524  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (649.023µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.981250  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.416549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.981561  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 02:59:37.982692  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (751.643µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.984465  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.984718  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 02:59:37.985837  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (895.866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.987587  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.987783  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 02:59:37.988769  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (710.587µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.990553  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.990777  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 02:59:37.991682  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (729.391µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.993149  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.993360  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 02:59:37.994242  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (760.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.995939  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.996129  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 02:59:37.997012  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (704.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.998903  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.536823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:37.999140  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 02:59:37.999976  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (678.279µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.002052  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.82602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.002293  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 02:59:38.003435  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (975.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.005256  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.535105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.005510  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 02:59:38.007543  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.90013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.009561  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.627847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.009873  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 02:59:38.011069  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (809.607µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.012739  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.189698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.012907  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 02:59:38.018198  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (5.121359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.020066  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.299863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.020240  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 02:59:38.021604  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.213443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.022953  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.084868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.023378  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 02:59:38.024219  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (719.752µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.025923  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.350773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.026243  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 02:59:38.027136  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (732.954µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.028584  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.152457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.028892  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 02:59:38.029803  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (753.275µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.031146  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.031745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.031341  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 02:59:38.042235  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.042262  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.042302  108596 httplog.go:90] GET /healthz: (1.027585ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
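
The block above is the apiserver's aggregated /healthz report: every registered check is [+] ok except the rbac/bootstrap-roles post-start hook, which keeps the endpoint failing until the role and binding creation in this log completes; the two clients polling it (scheduler.test and Go-http-client/1.1) simply re-check until it succeeds. A minimal poll loop to illustrate, using only the standard library; the URL, cadence, and timeout are illustrative, not taken from the test harness:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every registered check reported ok
			}
		}
		// Roughly the ~100ms re-poll cadence visible in the timestamps above.
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ok within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
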
I0920 02:59:38.042770  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.616942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.057184  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.057216  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.057256  108596 httplog.go:90] GET /healthz: (2.710396ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.063105  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.064166  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 02:59:38.082520  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.365312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.104299  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.107961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.104573  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 02:59:38.122702  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.42817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.145478  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.145934  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.146471  108596 httplog.go:90] GET /healthz: (3.118855ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:38.145781  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.223131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.146518  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 02:59:38.153868  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.153890  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.153917  108596 httplog.go:90] GET /healthz: (834.801µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.163847  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.36743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.183220  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.183554  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
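
For orientation, the cluster-admin binding created above has a well-known shape: it grants the cluster-admin ClusterRole to the system:masters group. An illustrative reconstruction; the field values are the standard bootstrap defaults, not read back from this run:

package bootstrap

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterAdminBinding mirrors the default binding:
// ClusterRole cluster-admin -> Group system:masters.
var clusterAdminBinding = &rbacv1.ClusterRoleBinding{
	ObjectMeta: metav1.ObjectMeta{Name: "cluster-admin"},
	RoleRef: rbacv1.RoleRef{
		APIGroup: "rbac.authorization.k8s.io",
		Kind:     "ClusterRole",
		Name:     "cluster-admin",
	},
	Subjects: []rbacv1.Subject{{
		APIGroup: "rbac.authorization.k8s.io",
		Kind:     "Group",
		Name:     "system:masters",
	}},
}
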
I0920 02:59:38.202664  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.287376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.223763  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.476043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.224007  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 02:59:38.242326  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.242359  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.242395  108596 httplog.go:90] GET /healthz: (1.142717ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:38.243072  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.359719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.254280  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.254337  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.254392  108596 httplog.go:90] GET /healthz: (1.167689ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.263262  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.263639  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 02:59:38.282659  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.415372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.303536  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.342196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.303865  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 02:59:38.322409  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.142131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.343935  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.343966  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.344003  108596 httplog.go:90] GET /healthz: (2.749341ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:38.344622  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.921394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.344882  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 02:59:38.354088  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.354119  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.354152  108596 httplog.go:90] GET /healthz: (1.044255ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.362268  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.098474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.383686  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.49411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.384114  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 02:59:38.402498  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.310544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.423475  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.31031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.424015  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 02:59:38.442883  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.442916  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.442953  108596 httplog.go:90] GET /healthz: (1.642662ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:38.443023  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.355777ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.453947  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.453975  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.454005  108596 httplog.go:90] GET /healthz: (886.883µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.462875  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.771655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.463289  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 02:59:38.482515  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.299812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.503475  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.268404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.503729  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 02:59:38.522593  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.397713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.544000  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.544031  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.544057  108596 httplog.go:90] GET /healthz: (1.462647ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:38.544014  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.199231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.544219  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 02:59:38.556404  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.556430  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.556471  108596 httplog.go:90] GET /healthz: (3.335163ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.562160  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.037761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.585764  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.983289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.586009  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 02:59:38.602461  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.258348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.623566  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.347039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.623839  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 02:59:38.642720  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.642750  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.642798  108596 httplog.go:90] GET /healthz: (1.230073ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:38.642836  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.468197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.653976  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.654011  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.654045  108596 httplog.go:90] GET /healthz: (923.824µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.666459  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.97632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.666800  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 02:59:38.682296  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.120788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.703723  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.571509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.703967  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 02:59:38.723665  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.547245ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.742370  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.742400  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.742438  108596 httplog.go:90] GET /healthz: (1.157702ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:38.744518  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.078428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.744794  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 02:59:38.754685  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.754713  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.754754  108596 httplog.go:90] GET /healthz: (1.644203ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.762478  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.335665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.784212  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.035805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.784513  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 02:59:38.802822  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.340158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.823214  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.990818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.823493  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 02:59:38.842171  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.842204  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.842240  108596 httplog.go:90] GET /healthz: (950.011µs) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:38.842515  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.317186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.853871  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.853895  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.853933  108596 httplog.go:90] GET /healthz: (807.426µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.864745  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.2539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.865058  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 02:59:38.882373  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.101546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.903544  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.380711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.903777  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 02:59:38.922714  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.530382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.943790  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.116522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:38.944031  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 02:59:38.944302  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.944347  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.944381  108596 httplog.go:90] GET /healthz: (3.091542ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:38.954164  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:38.954208  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:38.954248  108596 httplog.go:90] GET /healthz: (1.032799ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.962259  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.119107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.983412  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.177171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:38.983756  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 02:59:39.002491  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.305756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.023362  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.092462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.023579  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 02:59:39.042867  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.042896  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.042929  108596 httplog.go:90] GET /healthz: (1.680594ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.042990  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.546396ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.054120  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.054153  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.054191  108596 httplog.go:90] GET /healthz: (1.015508ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.063711  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.525404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.063962  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 02:59:39.082620  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.416403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.103654  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.423629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.103905  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 02:59:39.122551  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.301858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.143907  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.143939  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.143974  108596 httplog.go:90] GET /healthz: (2.609471ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.145034  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.313217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.145278  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 02:59:39.153922  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.153953  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.153986  108596 httplog.go:90] GET /healthz: (862.66µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.163825  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.179095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.183341  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.137335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.183539  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 02:59:39.202474  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.28933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.223642  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.439064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.224055  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 02:59:39.242411  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.242447  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.242485  108596 httplog.go:90] GET /healthz: (1.268901ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.242594  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.102652ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.254134  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.254161  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.254196  108596 httplog.go:90] GET /healthz: (996.751µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.263180  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.030414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.263408  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 02:59:39.282492  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.288907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.303562  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.307027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.303798  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 02:59:39.322690  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.444587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.342973  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.343004  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.343123  108596 httplog.go:90] GET /healthz: (1.746165ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.343373  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.343830  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 02:59:39.354023  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.354055  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.354090  108596 httplog.go:90] GET /healthz: (974.721µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.362135  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (959.959µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.383520  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.328459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.383820  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 02:59:39.402592  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.41208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.423600  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.418856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.423846  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 02:59:39.442281  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.442347  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.442386  108596 httplog.go:90] GET /healthz: (1.153127ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.442495  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.094736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.456519  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.456550  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.456593  108596 httplog.go:90] GET /healthz: (1.116319ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.463304  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.180538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.463570  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 02:59:39.482629  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.375374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.502952  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.702645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.503196  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 02:59:39.522396  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.19455ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.542586  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.542622  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.542655  108596 httplog.go:90] GET /healthz: (826.13µs) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.553875  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.457203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.554262  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 02:59:39.567140  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.567173  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.567213  108596 httplog.go:90] GET /healthz: (14.136777ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:39.567706  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (6.597428ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.625874  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (44.705879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.626176  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 02:59:39.628289  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.875896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.631034  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.634602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.636113  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.712372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.636331  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
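
The sequence just above is the namespaced variant of the same pass: the Role GET 404s, the reconciler confirms the namespace exists (the 200 on /api/v1/namespaces/kube-system), then POSTs the Role into it. A sketch under the same assumptions as the earlier one (hypothetical ensureRole helper, pre-context client-go signatures):

package bootstrap

import (
	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRole creates role in ns if it is missing, after confirming ns exists.
func ensureRole(cs kubernetes.Interface, ns string, role *rbacv1.Role) error {
	_, err := cs.RbacV1().Roles(ns).Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // Role already present
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	// The GET on /api/v1/namespaces/<ns> in the log is this existence check.
	if _, err := cs.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}); err != nil {
		return err
	}
	_, err = cs.RbacV1().Roles(ns).Create(role)
	return err
}
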
I0920 02:59:39.642720  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.642744  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.642775  108596 httplog.go:90] GET /healthz: (1.333967ms) 0 [Go-http-client/1.1 127.0.0.1:48368]
I0920 02:59:39.644588  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.001395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.646277  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.336488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.655592  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.655622  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.655675  108596 httplog.go:90] GET /healthz: (2.523343ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.663047  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.895082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.663272  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 02:59:39.683871  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.648907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.685717  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.491231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.729359  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (28.151894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.729640  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 02:59:39.731510  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.610694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.734017  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.146127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.751380  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.751413  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.751455  108596 httplog.go:90] GET /healthz: (10.265578ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.751407  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (10.252415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.751773  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 02:59:39.759988  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.760022  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.760059  108596 httplog.go:90] GET /healthz: (6.949296ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.762237  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.082697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.763929  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.31357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.783487  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.264195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.783722  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 02:59:39.802574  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.354116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.805275  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.239058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.824138  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.863248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.824382  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 02:59:39.842304  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.070926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.842498  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.842516  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.842563  108596 httplog.go:90] GET /healthz: (1.193821ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.844963  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.238259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.854548  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.854573  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.854605  108596 httplog.go:90] GET /healthz: (1.463961ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.863370  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.198243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.863618  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 02:59:39.882914  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.677439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.884488  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.043343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.903388  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.111438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.903769  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 02:59:39.922565  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.392354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.924369  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.379994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.942254  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.942285  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.942359  108596 httplog.go:90] GET /healthz: (999.143µs) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:39.943668  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.439374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.943903  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 02:59:39.953972  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:39.954006  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:39.954043  108596 httplog.go:90] GET /healthz: (898.202µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.962117  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (945.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.963588  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.052973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.984118  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.373513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:39.984781  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 02:59:40.002750  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.5876ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.004383  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.258536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.035114  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (12.292793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.035996  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 02:59:40.042255  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:40.042284  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:40.042352  108596 httplog.go:90] GET /healthz: (1.057253ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:40.042634  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.414164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.045158  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.105381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.054490  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:40.054519  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:40.054554  108596 httplog.go:90] GET /healthz: (1.170312ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.063594  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.415768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.063828  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 02:59:40.083518  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.359698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.085213  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.231465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.110984  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (9.848709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.111244  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 02:59:40.122396  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.171499ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.124285  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.49694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.143166  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 02:59:40.143189  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 02:59:40.143233  108596 httplog.go:90] GET /healthz: (1.651802ms) 0 [Go-http-client/1.1 127.0.0.1:48370]
I0920 02:59:40.144249  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.034519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.144513  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 02:59:40.154159  108596 httplog.go:90] GET /healthz: (964.614µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.155749  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.038073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.160182  108596 httplog.go:90] POST /api/v1/namespaces: (4.071071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.161630  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (934.312µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.164996  108596 httplog.go:90] POST /api/v1/namespaces/default/services: (3.001179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.166553  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.099795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.168584  108596 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.552146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
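The 200 on /healthz at 02:59:40.154 above marks the last post-start hook finishing; immediately afterwards the bootstrap controller runs the same 404-then-POST reconcile for the default namespace, the kubernetes service, and its endpoints. Verifying those three objects from a client is one GET each (a sketch, with imports and ctx/clientset as in the earlier role example):

    func checkBootstrapObjects(ctx context.Context, clientset kubernetes.Interface) error {
        if _, err := clientset.CoreV1().Namespaces().Get(ctx, "default", metav1.GetOptions{}); err != nil {
            return err
        }
        if _, err := clientset.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{}); err != nil {
            return err
        }
        _, err := clientset.CoreV1().Endpoints("default").Get(ctx, "kubernetes", metav1.GetOptions{})
        return err
    }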
I0920 02:59:40.242262  108596 httplog.go:90] GET /healthz: (933.435µs) 200 [Go-http-client/1.1 127.0.0.1:48368]
W0920 02:59:40.243024  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243088  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243101  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243130  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243140  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243153  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243163  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243180  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243199  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243209  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.243275  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 02:59:40.243297  108596 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 02:59:40.243306  108596 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
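factory.go:382 lists the DefaultProvider's fit predicates and priorities. Note what is absent: there is no condition-reading predicate for PID pressure. With TaintNodesByCondition (the feature this PR promotes to GA), the node lifecycle controller surfaces the condition as the node.kubernetes.io/pid-pressure taint, and PodToleratesNodeTaints in the list above does the filtering. For contrast, a toy condition check of the old style (a sketch, not the scheduler's actual predicate API; corev1 is k8s.io/api/core/v1):

    func nodeHasPIDPressure(node *corev1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodePIDPressure && cond.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }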
I0920 02:59:40.244499  108596 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 02:59:40.244728  108596 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 02:59:40.244744  108596 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 02:59:40.245762  108596 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (651.945µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 02:59:40.246492  108596 get.go:251] Starting watch for /api/v1/pods, rv=30360 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m1s
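The scheduler's own pod reflector (util.go:232) lists and watches with the field selector status.phase!=Failed,status.phase!=Succeeded, so terminal pods never enter the scheduling cache. The equivalent filtered LIST through client-go (a sketch, same assumed imports and clientset/ctx as above):

    func listNonTerminalPods(ctx context.Context, clientset kubernetes.Interface) error {
        pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
            FieldSelector: "status.phase!=Failed,status.phase!=Succeeded",
        })
        if err != nil {
            return err
        }
        fmt.Printf("%d non-terminal pods\n", len(pods.Items))
        return nil
    }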
E0920 02:59:40.284904  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:34005/apis/events.k8s.io/v1beta1/namespaces/permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/events: dial tcp 127.0.0.1:34005: connect: connection refused' (may retry after sleeping)
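The connection-refused event write above targets 127.0.0.1:34005 and a permit-plugin namespace: it appears to be a broadcaster goroutine left over from an earlier test in this package whose apiserver has already been torn down, i.e. cross-test noise rather than part of TestNodePIDPressure.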
I0920 02:59:40.344682  108596 shared_informer.go:227] caches populated
I0920 02:59:40.344718  108596 shared_informer.go:204] Caches are synced for scheduler 
I0920 02:59:40.345077  108596 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.345105  108596 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.345558  108596 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.345578  108596 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.345783  108596 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.345806  108596 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346117  108596 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346138  108596 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346233  108596 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346250  108596 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346641  108596 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346658  108596 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346696  108596 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.346708  108596 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.347057  108596 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.347071  108596 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.347352  108596 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.347364  108596 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (572.933µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:40.347367  108596 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.347512  108596 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.347524  108596 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 02:59:40.348226  108596 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (348.262µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48672]
I0920 02:59:40.348288  108596 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (442.21µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 02:59:40.348367  108596 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (449.233µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48674]
I0920 02:59:40.348832  108596 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30361 labels= fields= timeout=8m7s
I0920 02:59:40.348889  108596 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30360 labels= fields= timeout=5m27s
I0920 02:59:40.348941  108596 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30361 labels= fields= timeout=5m42s
I0920 02:59:40.350538  108596 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30361 labels= fields= timeout=7m18s
I0920 02:59:40.351171  108596 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (468.106µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48678]
I0920 02:59:40.351643  108596 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (372.209µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48682]
I0920 02:59:40.352100  108596 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (357.621µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48680]
I0920 02:59:40.352649  108596 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (453.616µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48686]
I0920 02:59:40.353088  108596 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (314.715µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48688]
I0920 02:59:40.354281  108596 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30360 labels= fields= timeout=5m53s
I0920 02:59:40.354618  108596 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30360 labels= fields= timeout=8m5s
I0920 02:59:40.355024  108596 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30361 labels= fields= timeout=6m28s
I0920 02:59:40.355047  108596 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30360 labels= fields= timeout=9m32s
I0920 02:59:40.354285  108596 get.go:251] Starting watch for /api/v1/services, rv=30605 labels= fields= timeout=7m55s
I0920 02:59:40.358898  108596 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (5.711996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48690]
I0920 02:59:40.361780  108596 get.go:251] Starting watch for /api/v1/nodes, rv=30360 labels= fields= timeout=9m29s
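The burst from 02:59:40.345 to 02:59:40.361 is the shared informer factory spinning up one reflector per resource the scheduler watches, each doing a LIST at resourceVersion=0 and then opening a WATCH from the returned revision; note the (1s) resync period in every Starting reflector line. In client-go terms the whole block reduces to a few calls (a sketch, assuming imports k8s.io/client-go/informers, k8s.io/client-go/tools/cache, and time alongside the earlier ones):

    func startInformers(clientset kubernetes.Interface) cache.SharedIndexInformer {
        factory := informers.NewSharedInformerFactory(clientset, time.Second) // the (1s) resync above
        podInformer := factory.Core().V1().Pods().Informer()
        stopCh := make(chan struct{})
        factory.Start(stopCh)
        if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        return podInformer
    }

The caches populated lines that follow are exactly this wait completing.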
I0920 02:59:40.445005  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445042  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445049  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445056  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445062  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445068  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445074  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445080  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445085  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445095  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445105  108596 shared_informer.go:227] caches populated
I0920 02:59:40.445185  108596 node_lifecycle_controller.go:327] Sending events to api server.
I0920 02:59:40.445269  108596 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 02:59:40.445291  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 02:59:40.445385  108596 taint_manager.go:162] Sending events to api server.
I0920 02:59:40.445459  108596 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 02:59:40.445489  108596 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 02:59:40.445501  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 02:59:40.445520  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 02:59:40.445625  108596 node_lifecycle_controller.go:488] Starting node controller
I0920 02:59:40.445658  108596 shared_informer.go:197] Waiting for caches to sync for taint
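The 02:59:40.445 lines show the NodeLifecycleController coming up with the settings this PR exercises: taint-based evictions, label reconciliation, and taint-node-by-condition. The last one mirrors node conditions into well-known NoSchedule taints instead of having the scheduler read conditions directly. The mapping is roughly the following (a sketch; the keys are the real well-known taints, but this table is illustrative, not the controller's source):

    var conditionTaints = map[corev1.NodeConditionType]string{
        corev1.NodeMemoryPressure:     "node.kubernetes.io/memory-pressure",
        corev1.NodeDiskPressure:       "node.kubernetes.io/disk-pressure",
        corev1.NodePIDPressure:        "node.kubernetes.io/pid-pressure",
        corev1.NodeNetworkUnavailable: "node.kubernetes.io/network-unavailable",
    }

    func taintsForConditions(node *corev1.Node) []corev1.Taint {
        var taints []corev1.Taint
        for _, cond := range node.Status.Conditions {
            if key, ok := conditionTaints[cond.Type]; ok && cond.Status == corev1.ConditionTrue {
                taints = append(taints, corev1.Taint{Key: key, Effect: corev1.TaintEffectNoSchedule})
            }
        }
        return taints
    }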
I0920 02:59:40.448159  108596 httplog.go:90] POST /api/v1/nodes: (1.986211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.448427  108596 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0920 02:59:40.450653  108596 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.004322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
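The POST /api/v1/nodes plus PUT .../testnode/status pair is the test registering its fake node and then writing a status for it; node_tree.go confirms the scheduler's cache picked it up. Roughly (a sketch; the actual capacity and conditions the test sets live in the integration util helpers, so the condition below is a placeholder):

    func registerTestNode(ctx context.Context, clientset kubernetes.Interface) (*corev1.Node, error) {
        node, err := clientset.CoreV1().Nodes().Create(ctx,
            &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "testnode"}},
            metav1.CreateOptions{})
        if err != nil {
            return nil, err
        }
        node.Status.Conditions = append(node.Status.Conditions, corev1.NodeCondition{
            Type:   corev1.NodePIDPressure, // placeholder condition for this sketch
            Status: corev1.ConditionTrue,
        })
        return clientset.CoreV1().Nodes().UpdateStatus(ctx, node, metav1.UpdateOptions{})
    }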
I0920 02:59:40.454955  108596 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods: (3.896979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.455171  108596 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pidpressure-fake-name
I0920 02:59:40.455186  108596 scheduler.go:530] Attempting to schedule pod: node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pidpressure-fake-name
I0920 02:59:40.455348  108596 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pidpressure-fake-name", node "testnode"
I0920 02:59:40.455368  108596 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0920 02:59:40.455418  108596 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0920 02:59:40.458035  108596 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name/binding: (2.168985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.458263  108596 scheduler.go:662] pod node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0920 02:59:40.460219  108596 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/events: (1.651421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
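Scheduling itself is three steps in the log: assume volumes (scheduler_binder), POST a Binding to the pod's binding subresource (the 201 above), then emit a scheduled event. The binding POST can be issued directly with client-go (a sketch; ns/ctx/clientset assumed, context-taking signature per client-go v0.18+):

    func bindPod(ctx context.Context, clientset kubernetes.Interface, ns string) error {
        binding := &corev1.Binding{
            ObjectMeta: metav1.ObjectMeta{Name: "pidpressure-fake-name", Namespace: ns},
            Target:     corev1.ObjectReference{Kind: "Node", Name: "testnode"},
        }
        return clientset.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
    }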
I0920 02:59:40.557056  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.449518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.657415  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.812759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.758505  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.815635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.857159  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.56433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:40.957148  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.575323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.057366  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.680967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.157042  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.499433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.257252  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.656259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
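The steady 100ms GETs of pidpressure-fake-name from here on are the test's wait loop, polling the pod until some condition holds or, as in this failed run, until the timeout expires. The usual shape with apimachinery's wait package (a sketch; the condition and timeout are placeholders, wait is k8s.io/apimachinery/pkg/util/wait):

    func waitForScheduled(ctx context.Context, clientset kubernetes.Interface, ns string) error {
        return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
            pod, err := clientset.CoreV1().Pods(ns).Get(ctx, "pidpressure-fake-name", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pod.Spec.NodeName != "", nil // placeholder condition
        })
    }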
I0920 02:59:41.347929  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:41.348688  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:41.348732  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:41.353932  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:41.354048  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:41.357738  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.14729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.361399  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
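These forcing resync lines recur once per second: the factory's 1s resync period replaying cached objects to every registered handler. On the handler side a resync arrives as an Update whose old and new objects carry the same ResourceVersion, so handlers typically guard on it (a sketch, continuing the startInformers example above):

    func addGuardedHandler(podInformer cache.SharedIndexInformer) {
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                oldPod, newPod := oldObj.(*corev1.Pod), newObj.(*corev1.Pod)
                if oldPod.ResourceVersion == newPod.ResourceVersion {
                    return // 1s resync replay: nothing actually changed
                }
                // a real update lands here
            },
        })
    }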
I0920 02:59:41.457497  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.797306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.558623  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.847178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.657216  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.581916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.757276  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.640825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.857856  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.231352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:41.958001  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.420777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.057153  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.554469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.157344  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.693354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.257562  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.974793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.348862  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:42.348910  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:42.348924  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:42.354177  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:42.354230  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:42.357157  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.495675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.361624  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:42.457887  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.318212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.557243  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.566143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.657418  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.781704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.757493  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.851171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.857746  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.075448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:42.957759  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.130116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.057956  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.351358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.157637  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.04288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:43.209821  108596 factory.go:590] Error getting pod permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/test-pod for retry: Get http://127.0.0.1:34005/api/v1/namespaces/permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/pods/test-pod: dial tcp 127.0.0.1:34005: connect: connection refused; retrying...
I0920 02:59:43.257355  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.710209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.349030  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:43.349076  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:43.349100  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:43.354453  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:43.354536  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:43.357455  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.839118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.361821  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:43.457536  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.75026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.557437  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.771295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.663037  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.977074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.757413  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.724751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.857184  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.532817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:43.957156  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.607368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.057402  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.759849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.157457  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.841959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.257400  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.669595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.349227  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:44.349285  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:44.349298  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:44.354628  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:44.354723  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:44.357538  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.781495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.362025  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:44.457264  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.647324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.557063  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.508182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.657109  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.503769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.757105  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.504745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.857216  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.615307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:44.957134  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.546423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.057421  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.778563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.157231  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.642028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.257274  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.679041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.349396  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:45.349443  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:45.349467  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:45.354776  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:45.355091  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:45.357481  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.825273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.362211  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:45.457544  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.932262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.557645  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.961862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.657213  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.577989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.757050  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.489699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.857309  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.693178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:45.957715  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.066821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.057450  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.840377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.157158  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.560769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.257306  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.693527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.349527  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:46.349582  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:46.349601  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:46.354926  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:46.355188  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:46.357424  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.829385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.362411  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:46.457887  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.232724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.557307  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.665122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.657464  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.831019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.760468  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (4.493419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.857452  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.806257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:46.957662  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.049426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.057292  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.709585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.157385  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.751168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.257574  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.982548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.349732  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:47.349788  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:47.349807  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:47.355103  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:47.355372  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:47.357103  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.529074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.362540  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:47.457038  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.4492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.557303  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.666461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.657307  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.685423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.757193  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.623518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.857353  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.647081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:47.957282  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.67868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.057203  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.552884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.157396  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.674619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.257326  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.679701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.349924  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:48.349973  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:48.349985  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:48.355518  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:48.357761  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:48.359896  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.744121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.362720  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:48.457414  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.763372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.557463  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.816051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.657492  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.824272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.728852  108596 factory.go:606] Attempting to bind signalling-pod to test-node-1
I0920 02:59:48.729398  108596 scheduler.go:500] Failed to bind pod: permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod
E0920 02:59:48.729419  108596 scheduler.go:502] scheduler cache ForgetPod failed: pod 56aacda7-fac6-46c0-bc33-e002b3c8acf1 wasn't assumed so cannot be forgotten
E0920 02:59:48.729439  108596 scheduler.go:653] error binding pod: Post http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod/binding: dial tcp 127.0.0.1:36219: connect: connection refused
E0920 02:59:48.729464  108596 factory.go:557] Error scheduling permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod: Post http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod/binding: dial tcp 127.0.0.1:36219: connect: connection refused; retrying
I0920 02:59:48.729503  108596 factory.go:615] Updating pod condition for permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0920 02:59:48.729881  108596 scheduler.go:333] Error updating the condition of the pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod: Put http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod/status: dial tcp 127.0.0.1:36219: connect: connection refused
E0920 02:59:48.729994  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
E0920 02:59:48.730249  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:36219/apis/events.k8s.io/v1beta1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/events: dial tcp 127.0.0.1:36219: connect: connection refused' (may retry after sleeping)
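Like the port-34005 errors earlier, everything mentioning signalling-pod and port 36219 is residue from a permit-plugin fixture whose apiserver is gone, but it does trace the scheduler's full bind-failure path: the bind POST fails, the cache refuses ForgetPod because this pod was never assumed, the PodScheduled=False (SchedulerError) condition write fails too, and factory.go:590 keeps retrying. A client watching for that outcome reads the pod's PodScheduled condition (a sketch):

    func reportSchedulingFailure(pod *corev1.Pod) {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodScheduled && cond.Status == corev1.ConditionFalse {
                fmt.Printf("not scheduled (%s): %s\n", cond.Reason, cond.Message)
            }
        }
    }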
I0920 02:59:48.757291  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.708914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:48.857405  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.739426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:48.930521  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
I0920 02:59:48.957061  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.425514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.057199  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.571963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.157443  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.850413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.257119  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.516113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:49.331088  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
I0920 02:59:49.350114  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:49.350163  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:49.350177  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:49.355638  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:49.357296  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.67934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.357928  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:49.362889  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:49.457459  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.726546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.557396  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.777598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.657293  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.662835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.757489  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.873742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.857201  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.597341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:49.957372  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.748897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.057267  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.653051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:50.131720  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
I0920 02:59:50.156686  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.654563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.157257  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.479852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 02:59:50.158149  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.124062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.159553  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.024333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
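The trio of GETs above (the default namespace, the kubernetes service, and its endpoints) recurs every ten seconds (compare the 02:59:50 and 03:00:00 entries below), consistent with a periodic bootstrap reconcile loop inside the test apiserver. An illustrative shape only, assuming apimachinery's wait.Until (not the apiserver's actual code):

package integration

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// runBootstrapReconcile re-reads the bootstrap objects on a fixed period,
// producing one namespace/service/endpoints read burst every ten seconds.
func runBootstrapReconcile(cs kubernetes.Interface, stopCh <-chan struct{}) {
	go wait.Until(func() {
		// One reconcile pass: read back the objects the apiserver bootstraps.
		cs.CoreV1().Namespaces().Get("default", metav1.GetOptions{})
		cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
		cs.CoreV1().Endpoints("default").Get("kubernetes", metav1.GetOptions{})
	}, 10*time.Second, stopCh)
}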
I0920 02:59:50.257191  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.522997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.350242  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:50.350308  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:50.350338  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:50.355807  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:50.357150  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.55982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.358110  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:50.363083  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:50.457146  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.426966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.557213  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.621306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.657421  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.687085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.763661  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.849807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.857553  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.737293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:50.957089  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.464126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.057047  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.463294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.157043  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.445825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.257046  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.428034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.350409  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:51.350465  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:51.350479  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:51.355937  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:51.357137  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.519409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.358298  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:51.363268  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:51.459407  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.9149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.557933  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.175959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.657623  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.964915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:51.732352  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
I0920 02:59:51.757378  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.740457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.857448  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.809349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:51.957163  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.544957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.057131  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.525312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.157460  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.871929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:52.231159  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:34005/apis/events.k8s.io/v1beta1/namespaces/permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/events: dial tcp 127.0.0.1:34005: connect: connection refused' (may retry after sleeping)
I0920 02:59:52.257289  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.678842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.350611  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:52.350630  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:52.350671  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:52.356073  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:52.357569  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.985686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.358481  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:52.363440  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:52.458360  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.737679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.557488  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.808871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.657834  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.07066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.757389  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.793284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.857598  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.930931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:52.957486  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.860949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.057431  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.843413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.157118  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.50097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.257252  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.657332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.350773  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:53.350806  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:53.350828  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:53.356387  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:53.357959  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.248096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.358655  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:53.363611  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:53.458713  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (3.111026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.557420  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.65876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.657497  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.843783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.757395  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.747221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.857469  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.814524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:53.957386  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.7009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.057724  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.035194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.157790  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.139391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.257710  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.05737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.350885  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:54.350937  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:54.350953  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:54.356553  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:54.357564  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.904062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.358829  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:54.363755  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:54.457668  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.020887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.559211  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (3.589843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.657647  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.990035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.757193  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.561644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:54.857305  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.671732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:54.932856  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
I0920 02:59:54.956852  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.328316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.057452  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.82254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.157439  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.807685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.257520  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.820311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.351046  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:55.351092  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:55.351103  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:55.356711  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:55.357221  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.61756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.359018  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:55.363958  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:55.457656  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.999183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.557415  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.71311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.657417  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.720717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.757623  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.949061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.858650  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.712374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:55.957759  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.167026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.057428  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.410017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.156941  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.386494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.257038  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.475406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.351251  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:56.351282  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:56.351291  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:56.356878  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:56.357264  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.630412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.359232  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:56.364151  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:56.460989  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (5.144521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.557381  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.769351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.657406  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.742152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.757215  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.597184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.858835  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (3.165741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:56.957586  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.917099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.057470  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.816577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.157212  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.529422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.257055  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.425506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.351402  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:57.351448  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:57.351468  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:57.357056  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:57.357274  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.6224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.359821  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:57.364307  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:57.457204  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.607392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.557393  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.769105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.657220  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.33701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.757259  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.632079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.857026  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.426186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:57.957127  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.455485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.057231  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.551586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.157351  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.705045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.257307  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.615021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.351525  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:58.351541  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:58.351616  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:58.357204  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.603446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.357203  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:58.359956  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:58.364485  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:58.456928  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.26379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.557101  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.452686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.657062  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.447927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.757209  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.604308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.857174  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.406604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:58.957158  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.564572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.057238  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.632441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.161069  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.605044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.256964  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.353541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.352632  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:59.352672  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:59.352684  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:59.357120  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.521979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.357374  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:59.360131  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:59.364662  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 02:59:59.457004  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.386118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.557084  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.441284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.657077  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.451352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.757360  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.60928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 02:59:59.857246  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.623542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 02:59:59.862273  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:36219/apis/events.k8s.io/v1beta1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/events: dial tcp 127.0.0.1:36219: connect: connection refused' (may retry after sleeping)
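The event_broadcaster.go:244 failures show the events.k8s.io/v1beta1 sink sleeping and retrying instead of dropping events when its apiserver is unreachable; the error surfaces only in the log, never to the caller. For illustration, the long-standing core/v1 analogue of this broadcaster wiring (a sketch of the same pattern, not the exact events.k8s.io code path used by this test):

package integration

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// newRecorder wires an event broadcaster to the core/v1 events API; writes
// that fail (e.g. connection refused) are retried after a sleep in the
// background, which is why a dead apiserver only produces log noise here.
func newRecorder(client kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: client.CoreV1().Events("")})
	return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "scheduler"})
}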
I0920 02:59:59.957041  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.453673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.057153  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.529821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.156725  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.597541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.157419  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.418451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:00.158547  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.062397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.159962  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.069947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.257033  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.461727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.352745  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:00.352765  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:00.352746  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:00.356997  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.426723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.357505  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:00.360263  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:00.364829  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:00.457123  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.4319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.557141  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.540865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.657191  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.510807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.756948  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.333045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.857088  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.501173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:00.957889  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.344805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.057105  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.467661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.157124  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.50445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.257560  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.538624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 03:00:01.333416  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
I0920 03:00:01.352933  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:01.352950  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:01.352933  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:01.357161  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.594357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.357691  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:01.360411  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:01.365007  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:01.456908  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.351622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.556975  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.327899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.657256  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.608124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.757201  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.570111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.857255  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.664554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:01.957392  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.778967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.057068  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.49774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.157142  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.523666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.257592  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.976353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.353073  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:02.353113  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:02.353113  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:02.357115  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.480332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.357808  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:02.360601  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:02.365181  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:02.457113  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.479287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.557037  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.480649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.657414  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.778405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.757781  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.529481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.857290  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.686462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:02.957140  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.522124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.057334  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.678257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.157249  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.671257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.257668  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.022307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.353207  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:03.353241  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:03.353260  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:03.357253  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.662862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.357981  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:03.360719  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:03.365375  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
E0920 03:00:03.452365  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:34005/apis/events.k8s.io/v1beta1/namespaces/permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/events: dial tcp 127.0.0.1:34005: connect: connection refused' (may retry after sleeping)
I0920 03:00:03.457102  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.449662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.557130  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.509315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.657108  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.517865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.757134  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.509113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.857300  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.693656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:03.958637  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.682522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.057129  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.492726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.157007  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.436995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.257116  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.533263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.353366  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:04.353447  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:04.353463  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:04.356990  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.371086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.358130  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:04.360855  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:04.365596  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:04.457091  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.460344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.557216  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.603254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.657198  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.583271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.757185  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.555085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.857023  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.354268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:04.957495  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.760784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.057272  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.618241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.157347  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.620049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.257386  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.708266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.353519  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:05.357601  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:05.357640  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:05.358208  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.519234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.358230  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:05.361218  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:05.365777  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:05.457209  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.563229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.557086  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.464879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.657409  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.783509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.757153  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.483958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.857422  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.795423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:05.957237  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.622098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.057086  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.463918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.158163  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.579911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.257575  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.897748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.355348  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:06.357227  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.573777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.359608  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:06.360535  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:06.360565  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:06.361372  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:06.365951  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:06.459389  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (3.813391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.557041  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.41013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.657548  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.397161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.756863  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.297302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.856900  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.312688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:06.956985  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.454392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.057147  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.498991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.157207  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.566929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.256968  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.345924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.355532  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:07.357135  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.529053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.359752  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:07.360669  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:07.360691  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:07.361504  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:07.366120  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:07.457334  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.698614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.557097  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.474711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.659514  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (3.928099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.757048  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.487274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.857153  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.524155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:07.960573  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.01689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.057014  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.381805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.156819  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.22456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.259225  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.738522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.356170  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:08.356989  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.404248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.359887  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:08.360778  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:08.360795  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:08.361666  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:08.366664  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:08.458020  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.318205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.557699  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.971063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.656847  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.264359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.756964  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.378964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 03:00:08.810422  108596 factory.go:590] Error getting pod permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/test-pod for retry: Get http://127.0.0.1:34005/api/v1/namespaces/permit-pluginecf6d943-6e44-4b68-b992-1c59a01bab56/pods/test-pod: dial tcp 127.0.0.1:34005: connect: connection refused; retrying...
I0920 03:00:08.857014  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.39472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:08.957581  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.560801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.058279  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.639102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.157072  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.46785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.257040  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.396485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.356416  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:09.357243  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.622617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.360038  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:09.360947  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:09.360979  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:09.361821  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:09.366906  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:09.456888  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.306189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.557518  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.934543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.657455  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.762129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.757060  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.484625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.857214  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.552615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:09.957290  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.702514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:10.057824  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.273655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
E0920 03:00:10.072646  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:36219/apis/events.k8s.io/v1beta1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/events: dial tcp 127.0.0.1:36219: connect: connection refused' (may retry after sleeping)
I0920 03:00:10.156710  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.428932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48694]
I0920 03:00:10.156788  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.287859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.158083  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.037455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.159309  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (877.435µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.257453  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.872117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.356963  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:10.357703  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (2.068332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.360208  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:10.361161  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:10.361168  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:10.361982  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:10.367080  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:00:10.457093  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.449654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.459045  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.468036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.463113  108596 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (3.578172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.465810  108596 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure0b85e05b-a566-4ee3-8d79-57c6bfcb4176/pods/pidpressure-fake-name: (1.123585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.466329  108596 shared_informer.go:223] stop requested
E0920 03:00:10.466450  108596 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
E0920 03:00:10.466470  108596 shared_informer.go:200] unable to sync caches for taint
I0920 03:00:10.466778  108596 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30360&timeout=5m27s&timeoutSeconds=327&watch=true: (30.118192182s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48370]
I0920 03:00:10.466782  108596 node_lifecycle_controller.go:492] Shutting down node controller
I0920 03:00:10.466782  108596 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30605&timeout=7m55s&timeoutSeconds=475&watch=true: (30.112715167s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48676]
I0920 03:00:10.466804  108596 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30360&timeout=5m53s&timeoutSeconds=353&watch=true: (30.112736226s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48680]
I0920 03:00:10.466854  108596 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30361&timeout=8m7s&timeoutSeconds=487&watch=true: (30.118247943s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48672]
I0920 03:00:10.466913  108596 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30360&timeout=9m29s&timeoutSeconds=569&watch=true: (30.10555205s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48690]
I0920 03:00:10.466935  108596 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30361&timeout=7m18s&timeoutSeconds=438&watch=true: (30.116652967s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48692]
I0920 03:00:10.466972  108596 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30360&timeoutSeconds=361&watch=true: (30.220827413s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48368]
I0920 03:00:10.466983  108596 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30361&timeout=5m42s&timeoutSeconds=342&watch=true: (30.118214513s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48674]
I0920 03:00:10.467013  108596 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30360&timeout=8m5s&timeoutSeconds=485&watch=true: (30.112568313s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48688]
I0920 03:00:10.467021  108596 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30361&timeout=6m28s&timeoutSeconds=388&watch=true: (30.112213202s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48678]
I0920 03:00:10.467115  108596 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30360&timeout=9m32s&timeoutSeconds=572&watch=true: (30.112294822s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48682]
I0920 03:00:10.470135  108596 httplog.go:90] DELETE /api/v1/nodes: (3.035257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.470276  108596 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0920 03:00:10.471302  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (832.348µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0920 03:00:10.472925  108596 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.222069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
--- FAIL: TestNodePIDPressure (33.85s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-025229.xml
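The GET requests above, spaced roughly 100ms apart against .../pods/pidpressure-fake-name, are the test polling the apiserver for the pod to be bound to a node; "timed out waiting for the condition" is the error string of wait.ErrWaitTimeout from k8s.io/apimachinery, surfacing once the poll budget ran out. A minimal sketch of that polling pattern, assuming a hypothetical helper name waitForPodScheduled (the test's actual helper lives in test/integration/scheduler) and the context-free client-go Get signature current as of this commit:

package sketch // illustrative only; not the test's actual package

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the apiserver once per 100ms -- matching the
// spacing of the GET requests in the log -- until the scheduler has bound
// the pod to a node, or the timeout elapses.
func waitForPodScheduled(cs clientset.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		// Each iteration is one GET /api/v1/namespaces/<ns>/pods/<name>.
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// A pod counts as scheduled once the binding has set spec.nodeName.
		return pod.Spec.NodeName != "", nil
	})
}

If the condition never returns true within the timeout, wait.Poll returns wait.ErrWaitTimeout, whose message is exactly the "timed out waiting for the condition" reported at predicates_test.go:924 — i.e. the pod here was still unscheduled when the budget elapsed.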



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.14s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0920 03:01:47.581853  108596 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 03:01:47.581869  108596 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 03:01:47.581879  108596 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 03:01:47.581887  108596 master.go:259] Using reconciler: 
I0920 03:01:47.583080  108596 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.583368  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.583518  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.584184  108596 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 03:01:47.584249  108596 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 03:01:47.584268  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.584803  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.584881  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.585495  108596 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 03:01:47.585598  108596 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 03:01:47.585857  108596 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.586152  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.586267  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.585965  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.586680  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.587766  108596 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 03:01:47.587805  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.587949  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.587971  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.588045  108596 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 03:01:47.588859  108596 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 03:01:47.588997  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.589020  108596 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.589165  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.589183  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.589254  108596 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 03:01:47.589997  108596 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 03:01:47.590138  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.590170  108596 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 03:01:47.590262  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.590286  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.591194  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.591278  108596 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 03:01:47.591367  108596 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 03:01:47.591874  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.591984  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.592156  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.592180  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.592667  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.593583  108596 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 03:01:47.593829  108596 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.593633  108596 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 03:01:47.594060  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.594080  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.595358  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.596373  108596 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 03:01:47.596495  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.596600  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.596618  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.596687  108596 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 03:01:47.597417  108596 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 03:01:47.597545  108596 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.597661  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.597678  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.597776  108596 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 03:01:47.598116  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.599179  108596 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 03:01:47.599198  108596 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 03:01:47.599200  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.599339  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.599451  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.599472  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.600509  108596 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 03:01:47.600560  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.600561  108596 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 03:01:47.600701  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.600849  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.600865  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.601874  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.601927  108596 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 03:01:47.601949  108596 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 03:01:47.602564  108596 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.602878  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.603194  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.603156  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.603869  108596 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 03:01:47.603917  108596 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 03:01:47.603995  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.604091  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.604103  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.605014  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.605386  108596 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 03:01:47.605424  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.605492  108596 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 03:01:47.605599  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.605624  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.606603  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.606624  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.606805  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.607432  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.607573  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.607591  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.608253  108596 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 03:01:47.608274  108596 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 03:01:47.608635  108596 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 03:01:47.608696  108596 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.608884  108596 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.609548  108596 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.610044  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.610190  108596 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.610746  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.611400  108596 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.611799  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.611921  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.612129  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.612594  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.613217  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.613438  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.614132  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.614418  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.614899  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.615080  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.615648  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.615831  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.615959  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.616080  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.616258  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.616408  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.616604  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.617032  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.617181  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.617727  108596 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.618260  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.618510  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.618761  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.619461  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.619812  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.620484  108596 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.621122  108596 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.621706  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.622449  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.622673  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.622780  108596 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 03:01:47.622809  108596 master.go:461] Enabling API group "authentication.k8s.io".
I0920 03:01:47.622825  108596 master.go:461] Enabling API group "authorization.k8s.io".
I0920 03:01:47.623186  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.623347  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.623383  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.624355  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 03:01:47.624406  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 03:01:47.624517  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.624631  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.624660  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.625462  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.625921  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 03:01:47.625992  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 03:01:47.626071  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.626195  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.626215  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.626837  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 03:01:47.626858  108596 master.go:461] Enabling API group "autoscaling".
I0920 03:01:47.626933  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 03:01:47.626956  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.627047  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.627066  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.627260  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.627876  108596 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 03:01:47.627959  108596 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 03:01:47.628002  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.627999  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.628097  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.628112  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.629029  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.630022  108596 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 03:01:47.630056  108596 master.go:461] Enabling API group "batch".
I0920 03:01:47.630234  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.630397  108596 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 03:01:47.630510  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.630568  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.631542  108596 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 03:01:47.631570  108596 master.go:461] Enabling API group "certificates.k8s.io".
I0920 03:01:47.631588  108596 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 03:01:47.631700  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.631819  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.631835  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.632578  108596 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 03:01:47.632715  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.632864  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.632896  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.632967  108596 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 03:01:47.633132  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.634297  108596 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 03:01:47.634347  108596 master.go:461] Enabling API group "coordination.k8s.io".
I0920 03:01:47.634360  108596 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 03:01:47.634478  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.634585  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.634605  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.634631  108596 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 03:01:47.634747  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.635040  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.635418  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.635449  108596 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 03:01:47.635421  108596 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 03:01:47.635539  108596 master.go:461] Enabling API group "extensions".
I0920 03:01:47.635711  108596 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.635895  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.635918  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.636239  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.637197  108596 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 03:01:47.637280  108596 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 03:01:47.637388  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.637741  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.637764  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.638279  108596 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 03:01:47.638301  108596 master.go:461] Enabling API group "networking.k8s.io".
I0920 03:01:47.638392  108596 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.638429  108596 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 03:01:47.638549  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.638572  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.639200  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.639228  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.639491  108596 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 03:01:47.639507  108596 master.go:461] Enabling API group "node.k8s.io".
I0920 03:01:47.639570  108596 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 03:01:47.639632  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.639748  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.639769  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.640950  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.641942  108596 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 03:01:47.642010  108596 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 03:01:47.642084  108596 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.642187  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.642201  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.643427  108596 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 03:01:47.643507  108596 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 03:01:47.643527  108596 master.go:461] Enabling API group "policy".
I0920 03:01:47.643693  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.643452  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.643842  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.643866  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.644692  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.645607  108596 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 03:01:47.645649  108596 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 03:01:47.645739  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.645834  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.645850  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.646299  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.646700  108596 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 03:01:47.646906  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.647173  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.647248  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.646985  108596 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 03:01:47.648191  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.648409  108596 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 03:01:47.648506  108596 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 03:01:47.648598  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.649155  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.649185  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.649894  108596 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 03:01:47.649943  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.649967  108596 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 03:01:47.650053  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.650070  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.650815  108596 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 03:01:47.650868  108596 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 03:01:47.651285  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.651472  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.651665  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.653063  108596 watch_cache.go:405] Replace watchCache (rev: 46066) 
I0920 03:01:47.653396  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.653466  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.655646  108596 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 03:01:47.655709  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.655783  108596 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 03:01:47.655952  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.655984  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.657857  108596 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 03:01:47.658074  108596 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 03:01:47.658483  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.658700  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.658785  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.659760  108596 watch_cache.go:405] Replace watchCache (rev: 46068) 
I0920 03:01:47.659997  108596 watch_cache.go:405] Replace watchCache (rev: 46068) 
I0920 03:01:47.661307  108596 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 03:01:47.661423  108596 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 03:01:47.661848  108596 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 03:01:47.662497  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.663823  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.663946  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.663972  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.664664  108596 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 03:01:47.664731  108596 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 03:01:47.664820  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.665244  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.665385  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.665419  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.666238  108596 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 03:01:47.666256  108596 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 03:01:47.666326  108596 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 03:01:47.666383  108596 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 03:01:47.667207  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.667353  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.667382  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.666969  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.668328  108596 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 03:01:47.668383  108596 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 03:01:47.668478  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.669228  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.669825  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.669855  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.670461  108596 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 03:01:47.670497  108596 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.670506  108596 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 03:01:47.670607  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.670630  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.671208  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.671377  108596 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 03:01:47.671463  108596 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.671496  108596 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 03:01:47.671615  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.671667  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.672490  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.672832  108596 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 03:01:47.672875  108596 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 03:01:47.673084  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.673309  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.673485  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.673666  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.674397  108596 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 03:01:47.674523  108596 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 03:01:47.674646  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.674772  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.674798  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.675241  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.675404  108596 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 03:01:47.675432  108596 master.go:461] Enabling API group "storage.k8s.io".
I0920 03:01:47.675475  108596 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 03:01:47.675683  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.675826  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.675850  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.676224  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.676377  108596 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 03:01:47.676485  108596 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 03:01:47.676586  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.676814  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.676840  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.677391  108596 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 03:01:47.677555  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.677663  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.677669  108596 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 03:01:47.677680  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.678600  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.678638  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.678653  108596 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 03:01:47.678752  108596 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 03:01:47.678789  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.678924  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.678958  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.679661  108596 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 03:01:47.679724  108596 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 03:01:47.679785  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.679795  108596 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.679895  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.679914  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.680864  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.681561  108596 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 03:01:47.681590  108596 master.go:461] Enabling API group "apps".
I0920 03:01:47.681610  108596 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 03:01:47.681624  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.681748  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.681770  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.682374  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.682603  108596 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 03:01:47.682653  108596 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 03:01:47.682647  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.682781  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.682811  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.683482  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.683646  108596 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 03:01:47.683687  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.683711  108596 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 03:01:47.683803  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.683824  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.684800  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.684824  108596 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 03:01:47.684848  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.684860  108596 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 03:01:47.684933  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.684946  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.685573  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.685769  108596 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 03:01:47.685824  108596 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 03:01:47.685841  108596 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 03:01:47.685869  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.686142  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:47.686160  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:47.686621  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.686752  108596 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 03:01:47.686807  108596 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 03:01:47.686810  108596 master.go:461] Enabling API group "events.k8s.io".
I0920 03:01:47.687287  108596 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.687647  108596 watch_cache.go:405] Replace watchCache (rev: 46069) 
I0920 03:01:47.687703  108596 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688000  108596 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688080  108596 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688144  108596 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688206  108596 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688370  108596 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688475  108596 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688569  108596 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.688657  108596 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.689234  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.689512  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.690250  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.690515  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.691432  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.691683  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.692216  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.692471  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.693101  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.693342  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:01:47.693419  108596 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
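[Editor's note, not part of the job output: the storagebackend.Config dumps repeated throughout this log are printed with Go's %#v verb, which renders time.Duration fields as raw nanosecond integers. A minimal standalone sketch (nothing here is taken from the kubernetes repo; the two constants are copied from the dumps above) decodes those values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied verbatim from the Config dumps in this log.
	compactionInterval := time.Duration(300000000000)   // CompactionInterval field
	countMetricPollPeriod := time.Duration(60000000000) // CountMetricPollPeriod field

	// time.Duration's String method makes the units readable.
	fmt.Println(compactionInterval)    // 5m0s: etcd compaction requested every 5 minutes
	fmt.Println(countMetricPollPeriod) // 1m0s: object-count metric polled every minute
}

So each "storing X in ... reading as ..." line above is configuring a store against the same etcd backend with a 5-minute compaction interval and a 1-minute count-metric poll period.]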
I0920 03:01:47.693886  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.694066  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.694276  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.694875  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.695506  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.696125  108596 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.696424  108596 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.697067  108596 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.697653  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.697906  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.698437  108596 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:01:47.698508  108596 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 03:01:47.699085  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.699376  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.699852  108596 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.700408  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.700813  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.701383  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.701854  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.702356  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.702791  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.703237  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.703654  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:01:47.703717  108596 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 03:01:47.704116  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.704673  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:01:47.704723  108596 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 03:01:47.705244  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.705766  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.705968  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.706358  108596 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.706697  108596 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.707181  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.707589  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:01:47.707651  108596 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 03:01:47.708229  108596 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.708732  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.708995  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.709612  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.709858  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.710117  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.710699  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.710963  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.711227  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.711880  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.712142  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.712436  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:01:47.712504  108596 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 03:01:47.712511  108596 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
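
The two warnings above (and the earlier ones for batch/v2alpha1, node.k8s.io/v1alpha1, rbac.authorization.k8s.io/v1alpha1, and scheduling.k8s.io/v1alpha1) come from the generic apiserver declining to install a group/version that has no enabled resources. A minimal sketch of that gating in Go — the names (apiGroupInfo, the map layout) are illustrative, not the actual genericapiserver code:

package main

import "log"

func main() {
    // groupVersion -> enabled resources; empty slice means nothing to serve.
    apiGroupInfo := map[string][]string{
        "apps/v1":      {"deployments", "replicasets", "statefulsets"},
        "apps/v1beta2": {}, // all resources disabled
        "apps/v1beta1": {},
    }
    for gv, resources := range apiGroupInfo {
        if len(resources) == 0 {
            // Matches the shape of the warnings in the log above.
            log.Printf("Skipping API %s because it has no resources.", gv)
            continue
        }
        log.Printf("Installing API %s with %d resources", gv, len(resources))
    }
}
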
I0920 03:01:47.713016  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.713628  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.714083  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.714568  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:01:47.715115  108596 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b2512eb4-3052-4b62-adce-7fd68f18fbc5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
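
Each storage_factory.go:285 line above records the same pattern: a resource is encoded to etcd at one pinned external version ("storing X in group/v1") while reads are decoded back to the in-memory internal representation ("reading as group/__internal"). A toy round-trip under that assumption — JSON stands in for the real codec, and both types are invented for illustration:

package main

import (
    "encoding/json"
    "fmt"
)

// internalJob is the in-memory ("__internal") representation.
type internalJob struct {
    Name        string
    Parallelism int
}

// v1Job is the pinned external storage version, e.g. batch/v1.
type v1Job struct {
    APIVersion  string `json:"apiVersion"`
    Name        string `json:"name"`
    Parallelism int    `json:"parallelism"`
}

// encodeV1 converts to the storage version before writing.
func encodeV1(in internalJob) ([]byte, error) {
    return json.Marshal(v1Job{APIVersion: "batch/v1", Name: in.Name, Parallelism: in.Parallelism})
}

// decodeInternal reads the stored external form back into the internal type.
func decodeInternal(raw []byte) (internalJob, error) {
    var out v1Job
    if err := json.Unmarshal(raw, &out); err != nil {
        return internalJob{}, err
    }
    return internalJob{Name: out.Name, Parallelism: out.Parallelism}, nil
}

func main() {
    raw, _ := encodeV1(internalJob{Name: "demo", Parallelism: 2})
    back, _ := decodeInternal(raw)
    fmt.Printf("stored %s, read back %+v\n", raw, back)
}
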
I0920 03:01:47.717640  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:01:47.717668  108596 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 03:01:47.717675  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:47.717682  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:47.717688  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:47.717693  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:47.717745  108596 httplog.go:90] GET /healthz: (207.507µs) 0 [Go-http-client/1.1 127.0.0.1:53184]
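
The report above is an aggregated health check: each named check contributes a [+] or [-] line, failure reasons are withheld from the HTTP body, and any single failure makes the endpoint report "healthz check failed" with a non-OK status. A self-contained sketch in that spirit — the check names, port, and handler wiring are stand-ins, not the kube-apiserver implementation:

package main

import (
    "errors"
    "fmt"
    "net/http"
)

type check struct {
    name string
    run  func() error
}

// healthz runs every named check and renders the [+]/[-] report,
// withholding the actual failure reason from the response body.
func healthz(checks []check) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        failed := false
        var body string
        for _, c := range checks {
            if err := c.run(); err != nil {
                failed = true
                body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
            } else {
                body += fmt.Sprintf("[+]%s ok\n", c.name)
            }
        }
        if failed {
            w.WriteHeader(http.StatusInternalServerError)
            body += "healthz check failed"
        }
        fmt.Fprint(w, body)
    }
}

func main() {
    checks := []check{
        {"ping", func() error { return nil }},
        {"etcd", func() error { return errors.New("etcd client connection not yet established") }},
    }
    http.HandleFunc("/healthz", healthz(checks))
    http.ListenAndServe("127.0.0.1:8080", nil)
}
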
I0920 03:01:47.718765  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.203749ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:47.720874  108596 httplog.go:90] GET /api/v1/services: (902.989µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:47.724338  108596 httplog.go:90] GET /api/v1/services: (783.431µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:47.726006  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:01:47.726033  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:47.726041  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:47.726047  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:47.726053  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:47.726078  108596 httplog.go:90] GET /healthz: (158.221µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:47.726958  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.036265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:47.727518  108596 httplog.go:90] GET /api/v1/services: (931.929µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:47.727882  108596 httplog.go:90] GET /api/v1/services: (691.691µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:47.728853  108596 httplog.go:90] POST /api/v1/namespaces: (1.495909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53188]
I0920 03:01:47.732230  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.026244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:47.737137  108596 httplog.go:90] POST /api/v1/namespaces: (4.544788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:47.738142  108596 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (692.975µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:47.739699  108596 httplog.go:90] POST /api/v1/namespaces: (1.226247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
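
The three GET-404/POST-201 pairs above are the bootstrap controller ensuring the system namespaces exist: look each one up, and create it only on not-found. A toy version of that get-or-create, with a map standing in for the namespaces API:

package main

import "fmt"

func main() {
    store := map[string]bool{} // existing namespaces
    ensure := func(ns string) {
        if store[ns] {
            fmt.Printf("GET /api/v1/namespaces/%s: 200\n", ns) // already there
            return
        }
        fmt.Printf("GET /api/v1/namespaces/%s: 404\n", ns) // not found
        store[ns] = true                                   // POST creates it
        fmt.Printf("POST /api/v1/namespaces: 201 (%s)\n", ns)
    }
    for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
        ensure(ns)
    }
}
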
I0920 03:01:47.818500  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:01:47.818533  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:47.818543  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:47.818550  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:47.818556  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:47.818583  108596 httplog.go:90] GET /healthz: (198.934µs) 0 [Go-http-client/1.1 127.0.0.1:53184]
I0920 03:01:47.826686  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:01:47.826720  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:47.826739  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:47.826746  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:47.826751  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:47.826772  108596 httplog.go:90] GET /healthz: (212.823µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
[... 14 further GET /healthz polls on 127.0.0.1:53184, alternating Go-http-client and scheduler.test callers, from 03:01:47.918 through 03:01:48.527, each returning 0 with an identical report: [-]etcd, [-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes, and [-]poststarthook/ca-registration failed (reason withheld), all other checks ok ...]
I0920 03:01:48.581819  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:01:48.581898  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:01:48.619350  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.619378  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:48.619384  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:48.619390  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:48.619444  108596 httplog.go:90] GET /healthz: (1.145509ms) 0 [Go-http-client/1.1 127.0.0.1:53184]
I0920 03:01:48.627200  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.627221  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:48.627228  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:48.627233  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:48.627268  108596 httplog.go:90] GET /healthz: (721.416µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
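
With the etcd client now connected, the etcd check flips to ok while the remaining poststarthooks still fail, and the caller simply keeps polling /healthz on a short interval (the ~100ms cadence visible in the timestamps above) until it returns 200. A sketch of such a wait loop; the URL and timings are illustrative:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls url every interval until it returns 200 OK
// or the timeout elapses.
func waitForHealthz(url string, interval, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(interval)
    }
    return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
    if err := waitForHealthz("http://127.0.0.1:8080/healthz", 100*time.Millisecond, 30*time.Second); err != nil {
        fmt.Println(err)
    }
}
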
I0920 03:01:48.718900  108596 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.211248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.718981  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.299346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:48.719790  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53446]
I0920 03:01:48.719718  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.719902  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:48.719914  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:01:48.719923  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:01:48.719963  108596 httplog.go:90] GET /healthz: (1.360829ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:48.720881  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (787.54µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53446]
I0920 03:01:48.720940  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (974.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0920 03:01:48.722654  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.482225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.722838  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.565984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.723665  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (676.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.724692  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (679.101µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.725459  108596 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.66962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.725681  108596 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 03:01:48.725998  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (841.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.726533  108596 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (693.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.727346  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.727364  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:01:48.727371  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:48.727402  108596 httplog.go:90] GET /healthz: (679.449µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:48.727577  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.299995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.728458  108596 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.532365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.728566  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (679.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53186]
I0920 03:01:48.728568  108596 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 03:01:48.728582  108596 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
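
The two "created PriorityClass" lines mark the scheduling bootstrap hook ensuring the built-in classes exist with their reserved values before declaring itself done. A toy version of that ensure loop, with a map standing in for the priorityclasses API:

package main

import "fmt"

func main() {
    classes := map[string]int{} // existing priority classes
    want := []struct {
        name  string
        value int
    }{
        {"system-node-critical", 2000001000},
        {"system-cluster-critical", 2000000000},
    }
    for _, pc := range want {
        if _, ok := classes[pc.name]; ok {
            continue // already exists: nothing to do
        }
        classes[pc.name] = pc.value
        fmt.Printf("created PriorityClass %s with value %d\n", pc.name, pc.value)
    }
    fmt.Println("all system priority classes are created successfully or already exist.")
}
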
I0920 03:01:48.730268  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.166471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.733463  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (618.445µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.735111  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.735415  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 03:01:48.736343  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (678.942µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.737916  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.15183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.738259  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0920 03:01:48.739263  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (595.625µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.741065  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.269908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.741279  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 03:01:48.742169  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (587.616µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.743672  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.108706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.743854  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 03:01:48.744705  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (671.671µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.746234  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.061167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.746632  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 03:01:48.747649  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (808.461µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.749395  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.413637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.749612  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 03:01:48.750516  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (717.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.751974  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.098754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.752138  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 03:01:48.753058  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (659.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.754882  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.271698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.755180  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 03:01:48.756175  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (652.108µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.758095  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.758438  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 03:01:48.759521  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (923.992µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.761136  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.175869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.761500  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 03:01:48.762551  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (713.828µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.764043  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.178228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.764355  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 03:01:48.765129  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (611.309µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.767281  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.663393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.767763  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 03:01:48.768619  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (680.494µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.770034  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.084946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.770425  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 03:01:48.771417  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (699.719µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.772952  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.163394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.773217  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 03:01:48.774301  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (773.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.776349  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.178455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.776576  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 03:01:48.777492  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (731.764µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.778862  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.055631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.779132  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 03:01:48.780087  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (780.398µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.781639  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.186972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.781868  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 03:01:48.782883  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (783.923µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.784664  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.248679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.786590  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 03:01:48.787424  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (645.064µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.792796  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.073071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.793070  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 03:01:48.794431  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (921.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.795960  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.108501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.796364  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 03:01:48.797128  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (566.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.798674  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.202179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.798939  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 03:01:48.800001  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (797.242µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.801651  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.801795  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 03:01:48.802728  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (767.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.804037  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.001543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.804185  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 03:01:48.804930  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (621.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.806488  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.227775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.806742  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 03:01:48.807664  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (692.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.809067  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.002638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.809253  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 03:01:48.810356  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (908.453µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.812064  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.373789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.812292  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 03:01:48.813169  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (670.083µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.814678  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.145577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.815025  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 03:01:48.815782  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (587.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.817711  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.817896  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 03:01:48.818904  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.819002  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:48.819136  108596 httplog.go:90] GET /healthz: (890.413µs) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:48.818965  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (852.185µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.820752  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.130103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.820956  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 03:01:48.821890  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (681.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.823728  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.337274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.823985  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 03:01:48.825031  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (779.152µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.826653  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.826983  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 03:01:48.827081  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.827108  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:48.827144  108596 httplog.go:90] GET /healthz: (598.394µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:48.827965  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (827.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.830002  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.830465  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 03:01:48.831940  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.2214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.835537  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.924813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.835816  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 03:01:48.837170  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (923.111µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.839161  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.839403  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 03:01:48.840538  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (946.381µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.842380  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.273642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.842556  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 03:01:48.843475  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (628.42µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.845031  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.126942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.845212  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 03:01:48.846015  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (637.854µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.847673  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.177282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.847979  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 03:01:48.849125  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (905.034µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.851004  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.851359  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 03:01:48.852495  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (864.857µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.854224  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.351085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.854562  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 03:01:48.855832  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (909.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.857344  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.134275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.857555  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 03:01:48.858678  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (945.624µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.860439  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.860776  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 03:01:48.861724  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (685.389µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.863625  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.480108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.863902  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 03:01:48.864770  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (634.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.866468  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.426346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.866663  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 03:01:48.867670  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (834.142µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.869295  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.319578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.869585  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 03:01:48.870448  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (601.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.871913  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.081786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.872108  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 03:01:48.873134  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (825.308µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.874441  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (982.747µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.874722  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 03:01:48.875588  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (724.621µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.876926  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (946.583µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.877373  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 03:01:48.878304  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (755.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.879919  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.206904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.880090  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 03:01:48.880841  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (589.794µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.882436  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.165284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.882624  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 03:01:48.883757  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (948.696µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.899584  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.624068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.899912  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 03:01:48.919222  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.919258  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:48.919285  108596 httplog.go:90] GET /healthz: (1.030037ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:48.919559  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.66166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.927364  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:48.927487  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:48.927645  108596 httplog.go:90] GET /healthz: (999.073µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.939641  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.670936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.940038  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 03:01:48.959039  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.079994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.981531  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.483181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:48.981736  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 03:01:48.999037  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.136081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.019163  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.019199  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.019233  108596 httplog.go:90] GET /healthz: (975.405µs) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.020120  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.164894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.020428  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 03:01:49.027341  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.027369  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.027409  108596 httplog.go:90] GET /healthz: (809.366µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.038935  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (955.972µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.059750  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.790147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.060065  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0920 03:01:49.078969  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.005415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.100367  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.417297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.100713  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 03:01:49.118953  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.025196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.119282  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.119329  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.119491  108596 httplog.go:90] GET /healthz: (1.002142ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.127369  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.127395  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.127435  108596 httplog.go:90] GET /healthz: (846.82µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.139579  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.605676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.139775  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 03:01:49.158935  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.020133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.179625  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.685451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.179882  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 03:01:49.199183  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.158524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.219085  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.219117  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.219172  108596 httplog.go:90] GET /healthz: (873.62µs) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.219537  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.564796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.219788  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 03:01:49.227290  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.227437  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.227624  108596 httplog.go:90] GET /healthz: (1.055178ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.239436  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (881.732µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.259700  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.761347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.259977  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 03:01:49.279177  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.232057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.301132  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.731461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.301486  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 03:01:49.319297  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.339535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.319479  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.319511  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.319554  108596 httplog.go:90] GET /healthz: (1.262043ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.327186  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.327209  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.327250  108596 httplog.go:90] GET /healthz: (733.381µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.339710  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.717318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.340024  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 03:01:49.358817  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (938.684µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.380473  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.714501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.380717  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 03:01:49.398862  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (917.711µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.419500  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.588079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.419641  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.419667  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.419717  108596 httplog.go:90] GET /healthz: (1.467328ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.419726  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 03:01:49.427147  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.427293  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.427539  108596 httplog.go:90] GET /healthz: (943.9µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.438783  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (840.8µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.459531  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.609446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.459768  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
E0920 03:01:49.477641  108596 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:36219/apis/events.k8s.io/v1beta1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/events: dial tcp 127.0.0.1:36219: connect: connection refused' (may retry after sleeping)
E0920 03:01:49.477676  108596 event_broadcaster.go:194] Unable to write event '&v1beta1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"signalling-pod.15c6063f3524861e", GenerateName:"", Namespace:"permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xbf592e092b7ae690, ext:81326980617, loc:(*time.Location)(0xabb7da0)}}, Series:(*v1beta1.EventSeries)(nil), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-fce0489f56d7", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1", Name:"signalling-pod", UID:"56aacda7-fac6-46c0-bc33-e002b3c8acf1", APIVersion:"v1", ResourceVersion:"28656", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"Binding rejected: Post http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod/binding: dial tcp 127.0.0.1:36219: connect: connection refused", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"default-scheduler", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}' (retry limit exceeded!)
I0920 03:01:49.478850  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.004879ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.499576  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.617198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.499850  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 03:01:49.519012  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.075173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.519655  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.519684  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.519731  108596 httplog.go:90] GET /healthz: (1.497624ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:49.527449  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.527474  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.527535  108596 httplog.go:90] GET /healthz: (956.18µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.540220  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.951322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.540464  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 03:01:49.559137  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.034028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.579706  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.675769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.579964  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 03:01:49.599553  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.531352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.619108  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.619140  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.619176  108596 httplog.go:90] GET /healthz: (807.454µs) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.619856  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.914037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.620144  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 03:01:49.627238  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.627683  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.627880  108596 httplog.go:90] GET /healthz: (1.338067ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.638792  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (914.551µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.659413  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.494514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.659671  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 03:01:49.679181  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.217655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.699408  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.475971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.699643  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 03:01:49.719135  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.121693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.719173  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.719194  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.719227  108596 httplog.go:90] GET /healthz: (936.414µs) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:49.727213  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.727241  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.727283  108596 httplog.go:90] GET /healthz: (708.552µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.740805  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.561364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.741039  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 03:01:49.758852  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (912.609µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.779614  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.647851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.780416  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 03:01:49.799199  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.203943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.819103  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.819130  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.819168  108596 httplog.go:90] GET /healthz: (767.794µs) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:49.819716  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.786126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.819953  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 03:01:49.827521  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.827555  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.827591  108596 httplog.go:90] GET /healthz: (962.307µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.838905  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.011657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.859849  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.869716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.860094  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 03:01:49.879179  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.118751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.899901  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.876375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.900141  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 03:01:49.919076  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.037887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:49.919444  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.919474  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.919512  108596 httplog.go:90] GET /healthz: (1.205328ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:49.927268  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:49.927286  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:49.927351  108596 httplog.go:90] GET /healthz: (751.244µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.939963  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.774425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.940207  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 03:01:49.959002  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.020995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.980517  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.543117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:49.980770  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 03:01:49.999758  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.289736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.019506  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.019549  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.019586  108596 httplog.go:90] GET /healthz: (1.091437ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:50.019951  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.020280  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 03:01:50.027478  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.027587  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.027842  108596 httplog.go:90] GET /healthz: (1.227034ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.038968  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.018438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.059602  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.618241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.060136  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 03:01:50.079199  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.272022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.099862  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.815391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.100118  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 03:01:50.119098  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.135703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.120567  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.120600  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.120633  108596 httplog.go:90] GET /healthz: (2.353456ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:50.127348  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.127376  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.127415  108596 httplog.go:90] GET /healthz: (787.547µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.139766  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.796061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.140597  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 03:01:50.158962  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.034179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.180037  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.196539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.180387  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 03:01:50.199242  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.315572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.219783  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.219811  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.219864  108596 httplog.go:90] GET /healthz: (1.638708ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:50.219888  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.781429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.220067  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 03:01:50.227232  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.227267  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.227295  108596 httplog.go:90] GET /healthz: (775.92µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.238785  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (872.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.259943  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.0119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.260149  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 03:01:50.279187  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.197634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.300214  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.233555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.300489  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 03:01:50.318898  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.015826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.319059  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.319083  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.319114  108596 httplog.go:90] GET /healthz: (837.603µs) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:50.327153  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.327209  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.327240  108596 httplog.go:90] GET /healthz: (755.486µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.339448  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.600899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.339694  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 03:01:50.358715  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (834.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.380006  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.014318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.380216  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 03:01:50.398989  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (892.969µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.419040  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.419064  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.419090  108596 httplog.go:90] GET /healthz: (909.567µs) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:50.419350  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.430487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.419540  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 03:01:50.427367  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.427396  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.427434  108596 httplog.go:90] GET /healthz: (883.606µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.438978  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.079337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.459954  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.944976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.460148  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 03:01:50.479014  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.032946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.480685  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.166367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.499644  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.670586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.499954  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 03:01:50.519122  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.519281  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.519355  108596 httplog.go:90] GET /healthz: (1.123481ms) 0 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:50.519511  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.578265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53448]
I0920 03:01:50.521491  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.201869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.527367  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.527391  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.527422  108596 httplog.go:90] GET /healthz: (861.516µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.539558  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.625442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.539765  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 03:01:50.558982  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.03256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.560963  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.278752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.579575  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.643706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.579818  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 03:01:50.599003  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.057228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.600805  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.283463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.619178  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.619224  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.619258  108596 httplog.go:90] GET /healthz: (1.005246ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:50.619882  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.931974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.620053  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 03:01:50.627455  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.627478  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.627508  108596 httplog.go:90] GET /healthz: (975.595µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.639084  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.113643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.641256  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.764261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.659911  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.88635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.660127  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 03:01:50.680127  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.187268ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.681900  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.289054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.699684  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.704627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.699942  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 03:01:50.719409  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.719442  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.719473  108596 httplog.go:90] GET /healthz: (1.097768ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:50.719411  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.427033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.721284  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.33887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.727350  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.727383  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.727449  108596 httplog.go:90] GET /healthz: (864.33µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.740072  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.055132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.740347  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 03:01:50.759079  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.122658ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.760745  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.166916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.780127  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.161821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.780416  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 03:01:50.799369  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.149202ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.801249  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.345293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.819556  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.819618  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.819717  108596 httplog.go:90] GET /healthz: (1.314199ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:50.820504  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.410065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.820709  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 03:01:50.827944  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.827973  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.828035  108596 httplog.go:90] GET /healthz: (1.072983ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.839054  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.086862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.841140  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.575639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.860137  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.197934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.860478  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 03:01:50.879977  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.903451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.881718  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.295101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.899733  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.67686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.900258  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 03:01:50.919018  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.919257  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.919498  108596 httplog.go:90] GET /healthz: (1.284644ms) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:50.919529  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.552805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.921519  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.485326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.927400  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:50.927430  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:50.927471  108596 httplog.go:90] GET /healthz: (930.375µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.939569  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.644028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.939824  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 03:01:50.959391  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.325581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.961157  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.332027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.979977  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.999212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:50.980220  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 03:01:50.999530  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.546919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.001109  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.09553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.019104  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:01:51.019132  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:01:51.019171  108596 httplog.go:90] GET /healthz: (982.169µs) 0 [Go-http-client/1.1 127.0.0.1:53448]
I0920 03:01:51.020283  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.339061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.020664  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 03:01:51.027631  108596 httplog.go:90] GET /healthz: (1.064676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.033584  108596 httplog.go:90] GET /api/v1/namespaces/default: (5.511331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.036528  108596 httplog.go:90] POST /api/v1/namespaces: (2.498872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.037835  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (948.829µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.041470  108596 httplog.go:90] POST /api/v1/namespaces/default/services: (3.169258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.042901  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (918.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.044036  108596 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (738.662µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
E0920 03:01:51.044237  108596 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
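The 422 above is Endpoints validation rejecting a nil advertise address. For reference, an object that passes that check looks roughly like the following sketch (all values hypothetical; core/v1 types from client-go):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch: every Subsets[i].Addresses[j].IP must parse as an IP address
// (e.g. 10.9.8.7), so a "<nil>" value is rejected with the 422 seen above.
var kubernetesEndpoints = &corev1.Endpoints{
	ObjectMeta: metav1.ObjectMeta{Name: "kubernetes", Namespace: "default"},
	Subsets: []corev1.EndpointSubset{{
		Addresses: []corev1.EndpointAddress{{IP: "10.0.0.1"}},
		Ports:     []corev1.EndpointPort{{Name: "https", Port: 443, Protocol: corev1.ProtocolTCP}},
	}},
}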
I0920 03:01:51.119783  108596 httplog.go:90] GET /healthz: (1.389439ms) 200 [Go-http-client/1.1 127.0.0.1:53450]
I0920 03:01:51.123044  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.037936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
W0920 03:01:51.123367  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.123502  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.123664  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.123766  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.123846  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.123918  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.124001  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.124074  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.124180  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.124256  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:01:51.124413  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:01:51.125925  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.210313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.127417  108596 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0920 03:01:51.127480  108596 factory.go:321] Registering predicate: PredicateOne
I0920 03:01:51.127490  108596 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0920 03:01:51.127497  108596 factory.go:321] Registering predicate: PredicateTwo
I0920 03:01:51.127525  108596 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0920 03:01:51.127534  108596 factory.go:336] Registering priority: PriorityOne
I0920 03:01:51.127542  108596 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0920 03:01:51.127554  108596 factory.go:336] Registering priority: PriorityTwo
I0920 03:01:51.127559  108596 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0920 03:01:51.127567  108596 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
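The configuration echoed in the lines above decodes from a kube-scheduler Policy stored in the scheduler-custom-policy-config-0 ConfigMap. Reconstructed from the log (the predicate and priority names are the test's own fixtures), the payload is roughly this sketch, embedded as a Go string:

// Two custom predicates plus two priorities with weights 1 and 5, matching
// the struct dump {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}.
const customPolicy = `{
	"kind": "Policy",
	"apiVersion": "v1",
	"predicates": [
		{"name": "PredicateOne"},
		{"name": "PredicateTwo"}
	],
	"priorities": [
		{"name": "PriorityOne", "weight": 1},
		{"name": "PriorityTwo", "weight": 5}
	]
}`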
I0920 03:01:51.129422  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.431633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
W0920 03:01:51.129952  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:01:51.131869  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.301767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.132153  108596 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0920 03:01:51.133438  108596 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0920 03:01:51.133507  108596 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0920 03:01:51.133722  108596 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0920 03:01:51.136141  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.518968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
W0920 03:01:51.136441  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:01:51.137881  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.050145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.138228  108596 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0920 03:01:51.138383  108596 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0920 03:01:51.140964  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.19583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
W0920 03:01:51.141465  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:01:51.143043  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (1.161962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.143709  108596 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0920 03:01:51.143823  108596 factory.go:321] Registering predicate: PredicateOne
I0920 03:01:51.143866  108596 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0920 03:01:51.143930  108596 factory.go:321] Registering predicate: PredicateTwo
I0920 03:01:51.144008  108596 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0920 03:01:51.144082  108596 factory.go:336] Registering priority: PriorityOne
I0920 03:01:51.144265  108596 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0920 03:01:51.144376  108596 factory.go:336] Registering priority: PriorityTwo
I0920 03:01:51.144482  108596 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0920 03:01:51.144575  108596 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0920 03:01:51.146430  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.286284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
W0920 03:01:51.146688  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:01:51.147932  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (826.878µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.148247  108596 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0920 03:01:51.148280  108596 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0920 03:01:51.148291  108596 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0920 03:01:51.148297  108596 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0920 03:01:51.320986  108596 request.go:538] Throttling request took 172.429669ms, request: POST:http://127.0.0.1:43909/api/v1/namespaces/kube-system/configmaps
I0920 03:01:51.323973  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.597613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
W0920 03:01:51.324406  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:01:51.521016  108596 request.go:538] Throttling request took 196.345261ms, request: GET:http://127.0.0.1:43909/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0920 03:01:51.522745  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.435817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.523190  108596 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0920 03:01:51.523218  108596 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0920 03:01:51.721016  108596 request.go:538] Throttling request took 197.5331ms, request: DELETE:http://127.0.0.1:43909/api/v1/nodes
I0920 03:01:51.722766  108596 httplog.go:90] DELETE /api/v1/nodes: (1.474128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
I0920 03:01:51.722991  108596 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0920 03:01:51.725066  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.746777ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53450]
--- FAIL: TestSchedulerCreationFromConfigMap (4.14s)
    scheduler_test.go:283: Expected predicates map[PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:283: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:283: Expected predicates map[PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:283: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-025229.xml
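The four mismatches above share a pattern: each "got" set carries CheckNodeUnschedulable and PodToleratesNodeTaints that the expectation (or the custom policy) never asked for, while the expected CheckNodeCondition never appears, i.e. this run registers the two replacement predicates unconditionally. A minimal sketch of the comparison behind the scheduler_test.go:283 failures (function and parameter names assumed):

package scheduler

import (
	"reflect"
	"testing"
)

// Sketch: when the registered set is a strict superset of the expected set,
// reflect.DeepEqual reports a mismatch and the test prints both maps, which
// is exactly the "Expected predicates ..., got ..." output above.
func checkPredicates(t *testing.T, expected, got map[string]struct{}) {
	if !reflect.DeepEqual(expected, got) {
		t.Errorf("Expected predicates %v, got %v", expected, got)
	}
}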



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0920 03:03:12.486973  108596 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.10s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-025229.xml
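The parent test emits only the feature_gate line before failing. Toggling a gate such as TaintBasedEvictions inside an integration test is conventionally done with the component-base testing helper; a minimal sketch, assuming the import paths of this vintage of the tree:

package scheduler

import (
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

func TestTaintBasedEvictionsSketch(t *testing.T) {
	// Enable the gate for this test only and restore the previous value on
	// exit, mirroring "TaintBasedEvictions:true" in the log above.
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.TaintBasedEvictions, true)()
	// ... create nodes, flip their Ready condition, and assert evictions ...
}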



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0920 03:04:22.584524  108596 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 03:04:22.584557  108596 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 03:04:22.584573  108596 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 03:04:22.584584  108596 master.go:259] Using reconciler: 
I0920 03:04:22.585892  108596 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.586157  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.586266  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.587357  108596 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 03:04:22.587406  108596 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 03:04:22.587402  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.587774  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.587929  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.588563  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.589239  108596 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 03:04:22.589288  108596 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.589673  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.589775  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.589356  108596 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 03:04:22.590533  108596 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 03:04:22.590585  108596 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 03:04:22.590692  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.590704  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.591065  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.591186  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.591243  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.592001  108596 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 03:04:22.592069  108596 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 03:04:22.592132  108596 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.592780  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.592808  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.592858  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.593785  108596 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 03:04:22.593837  108596 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 03:04:22.593922  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.594546  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.594631  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.594549  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.595186  108596 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 03:04:22.595276  108596 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 03:04:22.595303  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.595633  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.595698  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.595987  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.596233  108596 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 03:04:22.596363  108596 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 03:04:22.596521  108596 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.596817  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.596918  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.597117  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.597602  108596 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 03:04:22.597671  108596 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 03:04:22.597786  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.598000  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.598026  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.598279  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.598541  108596 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 03:04:22.598576  108596 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 03:04:22.598702  108596 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.598909  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.598933  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.599124  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.599661  108596 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 03:04:22.599722  108596 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 03:04:22.599798  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.600002  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.600029  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.600456  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.600654  108596 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 03:04:22.600731  108596 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 03:04:22.600839  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.601028  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.601060  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.601667  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.601821  108596 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 03:04:22.601858  108596 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 03:04:22.601964  108596 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.602208  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.602235  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.602661  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.602827  108596 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 03:04:22.602875  108596 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 03:04:22.602982  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.603162  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.603191  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.603787  108596 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 03:04:22.603812  108596 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 03:04:22.603822  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.604299  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.604351  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.604547  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.604809  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.605290  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.605353  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.605928  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.606135  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.606159  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.606766  108596 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 03:04:22.606793  108596 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 03:04:22.606817  108596 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 03:04:22.607408  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.607515  108596 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.607981  108596 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.608496  108596 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.608979  108596 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.609463  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.609899  108596 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.610165  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.610238  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.610428  108596 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.610867  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.611280  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.611426  108596 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.611931  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.612142  108596 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.612649  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.612836  108596 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.613551  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.613745  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.613864  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.614003  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.614165  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.614250  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.614374  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.614900  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.615150  108596 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.615735  108596 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.616223  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.616466  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.616648  108596 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.617164  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.617373  108596 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.617804  108596 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.618399  108596 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.618850  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.619467  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.619638  108596 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.619740  108596 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 03:04:22.619762  108596 master.go:461] Enabling API group "authentication.k8s.io".
I0920 03:04:22.619774  108596 master.go:461] Enabling API group "authorization.k8s.io".
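(The Skipping/Enabling lines are the apiserver deciding, per API group, whether any servable version is turned on before installing that group's REST storage. A hypothetical sketch of that gate — the types and names here are illustrative, not the actual master.go code:

package main

import "fmt"

type apiGroup struct {
	name     string
	versions map[string]bool // version -> enabled in serving config
}

func install(groups []apiGroup) {
	for _, g := range groups {
		enabled := false
		for _, on := range g.versions {
			if on {
				enabled = true
				break
			}
		}
		if !enabled {
			fmt.Printf("Skipping disabled API group %q.\n", g.name)
			continue
		}
		fmt.Printf("Enabling API group %q.\n", g.name)
	}
}

func main() {
	install([]apiGroup{
		{"auditregistration.k8s.io", map[string]bool{"v1alpha1": false}},
		{"authentication.k8s.io", map[string]bool{"v1": true, "v1beta1": true}},
	})
}
)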
I0920 03:04:22.619877  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.620048  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.620072  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.620795  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 03:04:22.620876  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 03:04:22.620963  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.621215  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.621243  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.621703  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.621784  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 03:04:22.621804  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 03:04:22.621976  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.622161  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.622179  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.622584  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.623130  108596 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 03:04:22.623153  108596 master.go:461] Enabling API group "autoscaling".
I0920 03:04:22.623228  108596 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
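(Every `parsed scheme: "endpoint"` / `ccResolverWrapper` pair is the embedded etcd v3 client dialing the test etcd at http://127.0.0.1:2379 — the ServerList in the config dumps — through gRPC's resolver. An equivalent standalone dial, assuming the same endpoint; the apiserver builds this from storagebackend.TransportConfig rather than calling clientv3 directly:

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // ServerList above
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Keys live under the per-test random Prefix from the config dumps,
	// e.g. .../c30d22f9-.../pods/<namespace>/<name>. Count everything:
	resp, err := cli.Get(context.Background(), "/",
		clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		panic(err)
	}
	fmt.Println("keys in etcd:", resp.Count)
}
)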
I0920 03:04:22.623291  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.623532  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.623554  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.623935  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.624365  108596 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 03:04:22.624466  108596 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 03:04:22.624493  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.624699  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.624723  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.625499  108596 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 03:04:22.625523  108596 master.go:461] Enabling API group "batch".
I0920 03:04:22.625587  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.625623  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.625809  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.625818  108596 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 03:04:22.625841  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.626524  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.626744  108596 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 03:04:22.626774  108596 master.go:461] Enabling API group "certificates.k8s.io".
I0920 03:04:22.626794  108596 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 03:04:22.626910  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.627088  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.627107  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.627544  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.627635  108596 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 03:04:22.627672  108596 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 03:04:22.627766  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.627982  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.628010  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.628440  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.628704  108596 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 03:04:22.628726  108596 master.go:461] Enabling API group "coordination.k8s.io".
I0920 03:04:22.628739  108596 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 03:04:22.628805  108596 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 03:04:22.628860  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.629083  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.629133  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.629614  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.629689  108596 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 03:04:22.629730  108596 master.go:461] Enabling API group "extensions".
I0920 03:04:22.629773  108596 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 03:04:22.629900  108596 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.630153  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.630176  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.630414  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.630987  108596 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 03:04:22.631017  108596 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 03:04:22.631102  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.631310  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.631369  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.631588  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.631834  108596 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 03:04:22.631855  108596 master.go:461] Enabling API group "networking.k8s.io".
I0920 03:04:22.631877  108596 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.631916  108596 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 03:04:22.632016  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.632033  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.632669  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.632861  108596 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 03:04:22.632899  108596 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 03:04:22.632909  108596 master.go:461] Enabling API group "node.k8s.io".
I0920 03:04:22.633034  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.633234  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.633266  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.633566  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.633770  108596 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 03:04:22.633796  108596 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 03:04:22.633916  108596 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.634094  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.634115  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.634496  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.634587  108596 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 03:04:22.634602  108596 master.go:461] Enabling API group "policy".
I0920 03:04:22.634629  108596 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 03:04:22.634632  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.634875  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.634898  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.635512  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.635919  108596 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 03:04:22.635973  108596 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 03:04:22.636172  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.636397  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.636420  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.636733  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.636947  108596 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 03:04:22.636978  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.637002  108596 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 03:04:22.637232  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.637251  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.637760  108596 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 03:04:22.637771  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.637844  108596 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 03:04:22.638199  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.638476  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.638499  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.638814  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.639309  108596 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 03:04:22.639371  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.639517  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.639537  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.639591  108596 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 03:04:22.640216  108596 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 03:04:22.640286  108596 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 03:04:22.640395  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.640498  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.640587  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.640629  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.641102  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.641267  108596 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 03:04:22.641310  108596 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 03:04:22.641303  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.641528  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.641555  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.642005  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.642092  108596 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 03:04:22.642220  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.642423  108596 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 03:04:22.642447  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.642471  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.643026  108596 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 03:04:22.643076  108596 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 03:04:22.643142  108596 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 03:04:22.643247  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.644050  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.645576  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.645804  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.645824  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.646233  108596 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 03:04:22.646298  108596 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 03:04:22.646399  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.646575  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.646594  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.646985  108596 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 03:04:22.646997  108596 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 03:04:22.647096  108596 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 03:04:22.647140  108596 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 03:04:22.647216  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.647283  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.647414  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.647436  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.647875  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.648230  108596 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 03:04:22.648344  108596 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 03:04:22.648386  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.648731  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.648754  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.649058  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.649474  108596 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 03:04:22.649519  108596 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.649541  108596 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 03:04:22.649674  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.649686  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.650299  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.650772  108596 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 03:04:22.650810  108596 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.650845  108596 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 03:04:22.651235  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.651263  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.651378  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.651907  108596 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 03:04:22.651970  108596 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 03:04:22.652013  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.652162  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.652176  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.652878  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.653461  108596 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 03:04:22.653525  108596 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 03:04:22.653586  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.653826  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.653854  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.654223  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.654460  108596 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 03:04:22.654485  108596 master.go:461] Enabling API group "storage.k8s.io".
I0920 03:04:22.654626  108596 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
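Annotation: every resource above follows the same registration pattern: `storage_factory` picks the on-disk (storage) version, an etcd client is dialed, `store.go` starts monitoring the object count, and a `reflector` lists and watches the type to prime the watch cache, which then logs `Replace watchCache (rev: N)` once the initial list lands. A minimal sketch of the list/watch half using `k8s.io/client-go` against an already-running apiserver (illustrative; the apiserver's internal cacher wires this differently):

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (assumed location).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// ListWatch + Reflector: the same "list, then watch from the returned
	// resourceVersion" sequence the reflector.go lines in the log describe.
	lw := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "pods", v1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Pod{}, store, 0)

	stop := make(chan struct{})
	go reflector.Run(stop)

	time.Sleep(2 * time.Second) // let the initial list complete
	fmt.Println("pods in local store:", len(store.List()))
	close(stop)
}
```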
I0920 03:04:22.654626  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.654798  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.654811  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.655259  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.655947  108596 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 03:04:22.655979  108596 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 03:04:22.656121  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.656327  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.656352  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.657128  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.657790  108596 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 03:04:22.657841  108596 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 03:04:22.657940  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.658538  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.658558  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.658581  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.659348  108596 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 03:04:22.659422  108596 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 03:04:22.659490  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.659700  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.659728  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.660366  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.660514  108596 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 03:04:22.660612  108596 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 03:04:22.660717  108596 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.660936  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.660959  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.661487  108596 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 03:04:22.661508  108596 master.go:461] Enabling API group "apps".
I0920 03:04:22.661538  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.661577  108596 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 03:04:22.661739  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.661760  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.661835  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.662472  108596 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 03:04:22.662513  108596 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 03:04:22.662513  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.662552  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.662839  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.662872  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.663133  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.663787  108596 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 03:04:22.663829  108596 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 03:04:22.663824  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.664019  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.664042  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.664689  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.664888  108596 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 03:04:22.664919  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.664977  108596 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 03:04:22.665097  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.665123  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.665690  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.665981  108596 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 03:04:22.666000  108596 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 03:04:22.666004  108596 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 03:04:22.666036  108596 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.666288  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:22.666339  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:22.667095  108596 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 03:04:22.667117  108596 master.go:461] Enabling API group "events.k8s.io".
I0920 03:04:22.667137  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.667180  108596 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 03:04:22.667300  108596 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.667567  108596 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.667873  108596 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.667993  108596 watch_cache.go:405] Replace watchCache (rev: 59815) 
I0920 03:04:22.668014  108596 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.668125  108596 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.668237  108596 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.668461  108596 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.668602  108596 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.668727  108596 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.668837  108596 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
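Annotation: unlike the resources earlier in the log, the `tokenreviews` and `*accessreviews` registrations above are not followed by `Monitoring ... count` or reflector lines. These are request-scoped virtual resources: the server evaluates them on POST and never persists them to etcd. A minimal client-side sketch of exercising one, assuming modern `k8s.io/client-go` signatures (older releases omit the context argument); this is a hypothetical check, not part of this test:

```go
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// "Can I list pods in kube-system?" — the server answers in the
	// response status; nothing is written to storage.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "pods",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
```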
I0920 03:04:22.669637  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.669893  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.670694  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.670957  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.671652  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.671930  108596 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.672556  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.672762  108596 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.673445  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.673659  108596 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:04:22.673711  108596 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0920 03:04:22.674327  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.674454  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.674667  108596 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.675331  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.675892  108596 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.676511  108596 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.676748  108596 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.677369  108596 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.677919  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.678117  108596 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.678634  108596 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:04:22.678692  108596 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 03:04:22.679384  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.679619  108596 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.680051  108596 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.680552  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.680947  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.681561  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.682077  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.682662  108596 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.683104  108596 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.683617  108596 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.684138  108596 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:04:22.684206  108596 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 03:04:22.684675  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.685138  108596 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:04:22.685191  108596 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 03:04:22.685657  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.686147  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.686407  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.686879  108596 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.687264  108596 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.687690  108596 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.688137  108596 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:04:22.688189  108596 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 03:04:22.688784  108596 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.689300  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.689568  108596 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.690150  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.690382  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.690593  108596 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.691214  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.691451  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.691693  108596 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.692237  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.692476  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.692697  108596 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 03:04:22.692744  108596 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 03:04:22.692752  108596 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
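Annotation: the `Skipping API <group>/<version> because it has no resources` warnings show how the generic apiserver decides what to serve. A group is enabled (`Enabling API group ...`) as long as at least one of its versions has registered storage; versions whose resource map is empty (here `batch/v2alpha1`, `node.k8s.io/v1alpha1`, `rbac.authorization.k8s.io/v1alpha1`, `scheduling.k8s.io/v1alpha1`, `storage.k8s.io/v1alpha1`, `apps/v1beta2`, `apps/v1beta1`) are dropped from discovery. A schematic sketch of that filtering step, with invented names standing in for the actual genericapiserver types:

```go
package main

import "fmt"

// apiGroupInfo is a stand-in for a group's per-version storage map:
// version -> resource name -> storage backend (here just a string).
type apiGroupInfo struct {
	group              string
	versionedResources map[string]map[string]string
}

// enabledVersions returns the versions that actually have resources,
// logging a skip for the empty ones, as genericapiserver.go:404 does.
func enabledVersions(info apiGroupInfo) []string {
	var out []string
	for version, resources := range info.versionedResources {
		if len(resources) == 0 {
			fmt.Printf("Skipping API %s/%s because it has no resources.\n", info.group, version)
			continue
		}
		out = append(out, version)
	}
	return out
}

func main() {
	apps := apiGroupInfo{
		group: "apps",
		versionedResources: map[string]map[string]string{
			"v1":      {"deployments": "etcd", "statefulsets": "etcd"},
			"v1beta1": {}, // no storage registered -> skipped
			"v1beta2": {}, // no storage registered -> skipped
		},
	}
	if vs := enabledVersions(apps); len(vs) > 0 {
		fmt.Printf("Enabling API group %q with versions %v\n", apps.group, vs)
	}
}
```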
I0920 03:04:22.693239  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.693708  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.694220  108596 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.694744  108596 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.695353  108596 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c30d22f9-3e39-4b9d-8eed-182b350fd9ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 03:04:22.697675  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.697713  108596 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 03:04:22.697720  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.697727  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.697733  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.697738  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.697760  108596 httplog.go:90] GET /healthz: (171.785µs) 0 [Go-http-client/1.1 127.0.0.1:35406]
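Annotation: the `/healthz` handler aggregates named checks and renders one `[+]name ok` or `[-]name failed: reason withheld` line per check. Until the etcd client connects and the post-start hooks (`bootstrap-controller`, `rbac/bootstrap-roles`, `scheduling/bootstrap-system-priority-classes`, `ca-registration`) finish, the endpoint reports failure; the actual reasons are withheld from the response body and logged server-side instead (the `healthz.go:177` lines above). A minimal sketch of such an aggregated handler using only `net/http` (illustrative, not the k8s.io/apiserver healthz package itself):

```go
package main

import (
	"fmt"
	"net/http"
)

// check is a named health probe; a nil error means healthy.
type check struct {
	name string
	run  func() error
}

// healthzHandler renders the same [+]/[-] report format seen in the log,
// withholding failure reasons from the response body.
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	etcdReady := false // flips to true once the backend connection is up
	checks := []check{
		{"ping", func() error { return nil }},
		{"etcd", func() error {
			if !etcdReady {
				return fmt.Errorf("etcd client connection not yet established")
			}
			return nil
		}},
	}
	http.Handle("/healthz", healthzHandler(checks))
	http.ListenAndServe(":8080", nil)
}
```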
I0920 03:04:22.698814  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.139792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0920 03:04:22.700915  108596 httplog.go:90] GET /api/v1/services: (926.653µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0920 03:04:22.704246  108596 httplog.go:90] GET /api/v1/services: (721.129µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0920 03:04:22.705850  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.705881  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.705892  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.705900  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.705910  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.705934  108596 httplog.go:90] GET /healthz: (151.059µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0920 03:04:22.706770  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (840.064µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0920 03:04:22.706967  108596 httplog.go:90] GET /api/v1/services: (678.726µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0920 03:04:22.707371  108596 httplog.go:90] GET /api/v1/services: (654.335µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:22.708140  108596 httplog.go:90] POST /api/v1/namespaces: (1.061271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I0920 03:04:22.709227  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (662.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:22.710563  108596 httplog.go:90] POST /api/v1/namespaces: (1.011432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:22.711587  108596 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (771.586µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:22.712884  108596 httplog.go:90] POST /api/v1/namespaces: (989.046µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
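
Each GET /api/v1/namespaces/... 404 followed by a POST .../namespaces 201 above is the bootstrap controller ensuring the three system namespaces (kube-system, kube-public, kube-node-lease) exist. A minimal create-if-missing sketch of the same pattern, using current client-go method signatures and assuming a configured clientset:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureNamespace mirrors the GET-404-then-POST-201 pairs in the log.
    func ensureNamespace(ctx context.Context, c kubernetes.Interface, name string) error {
        _, err := c.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            return nil // already present
        }
        if !apierrors.IsNotFound(err) {
            return err
        }
        ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
        _, err = c.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
        return err
    }
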
I0920 03:04:22.798403  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.798440  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.798449  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.798455  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.798465  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.798503  108596 httplog.go:90] GET /healthz: (268.385µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:22.806593  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.806621  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.806630  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.806636  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.806645  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.806673  108596 httplog.go:90] GET /healthz: (186.672µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:22.898354  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.898387  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.898395  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.898402  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.898407  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.898438  108596 httplog.go:90] GET /healthz: (243.325µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:22.906548  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.906577  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.906587  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.906594  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.906599  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.906629  108596 httplog.go:90] GET /healthz: (181.964µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:22.923225  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:22.923351  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:22.925027  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:22.925221  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:22.925808  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:22.927540  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:22.928248  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
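
The bursts of reflector.go "forcing resync" lines are the test's shared informers replaying their cached stores on a short resync interval. A sketch of wiring that up with client-go -- the 100ms interval here is illustrative, not necessarily what the test uses:

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // startPodInformer builds a factory whose informers periodically force a
    // resync, producing reflector.go log lines like the ones above.
    func startPodInformer(c kubernetes.Interface, stopCh <-chan struct{}) {
        factory := informers.NewSharedInformerFactory(c, 100*time.Millisecond)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            // On a forced resync, UpdateFunc fires for every cached object
            // with oldObj == newObj.
            UpdateFunc: func(oldObj, newObj interface{}) {},
        })
        factory.Start(stopCh)
        cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
    }
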
I0920 03:04:22.998456  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:22.998503  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:22.998517  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:22.998527  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:22.998537  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:22.998575  108596 httplog.go:90] GET /healthz: (274.709µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:23.006615  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.006649  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.006659  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.006665  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.006671  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.006701  108596 httplog.go:90] GET /healthz: (203.37µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.067477  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.067773  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.069510  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.069654  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.071010  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.071021  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.098459  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.098497  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.098509  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.098518  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.098527  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.098569  108596 httplog.go:90] GET /healthz: (287.573µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:23.106684  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.106724  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.106736  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.106745  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.106755  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.106792  108596 httplog.go:90] GET /healthz: (250.297µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.147887  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.147920  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.147928  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.148191  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.148209  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.149186  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.198376  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.198420  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.198432  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.198441  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.198449  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.198495  108596 httplog.go:90] GET /healthz: (268.509µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:23.206582  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.206611  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.206624  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.206633  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.206641  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.206676  108596 httplog.go:90] GET /healthz: (192.94µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.277414  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.298449  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.298485  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.298497  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.298504  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.298510  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.298538  108596 httplog.go:90] GET /healthz: (249.285µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:23.306705  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.306740  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.306749  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.306755  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.306761  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.306793  108596 httplog.go:90] GET /healthz: (259.428µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.353044  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.398414  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.398457  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.398471  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.398481  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.398489  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.398526  108596 httplog.go:90] GET /healthz: (264.805µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:23.406639  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.406670  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.406678  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.406685  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.406690  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.406720  108596 httplog.go:90] GET /healthz: (193.189µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.498428  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.498474  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.498487  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.498497  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.498505  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.498548  108596 httplog.go:90] GET /healthz: (288.026µs) 0 [Go-http-client/1.1 127.0.0.1:35410]
I0920 03:04:23.506697  108596 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 03:04:23.506743  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.506756  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.506763  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.506769  108596 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.506802  108596 httplog.go:90] GET /healthz: (222.817µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.584557  108596 client.go:361] parsed scheme: "endpoint"
I0920 03:04:23.584676  108596 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 03:04:23.599409  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.599437  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.599445  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.599450  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.599482  108596 httplog.go:90] GET /healthz: (1.211653ms) 0 [Go-http-client/1.1 127.0.0.1:35410]
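
Just before this block, at 03:04:23.584, the parsed scheme: "endpoint" lines mark the etcd clientv3 dial against 127.0.0.1:2379 finally going through, which is why the etcd check has flipped to [+] here, leaving only the three post-start hooks unfinished. Roughly, the storage backend's health check amounts to a bounded Get through that client; a sketch, with the import path depending on the etcd release in use:

    import (
        "context"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // etcdHealthy dials the endpoint from the log and issues the kind of
    // short, bounded Get the apiserver's etcd healthz check performs.
    func etcdHealthy() error {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            return err
        }
        defer cli.Close()
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        _, err = cli.Get(ctx, "health")
        return err
    }
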
I0920 03:04:23.607415  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.607437  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.607444  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.607450  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.607478  108596 httplog.go:90] GET /healthz: (971.111µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.699217  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.699417  108596 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 03:04:23.699523  108596 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 03:04:23.699231  108596 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.38576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.699548  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.535775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.699696  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.861137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I0920 03:04:23.699581  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 03:04:23.699741  108596 httplog.go:90] GET /healthz: (1.324277ms) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:23.700821  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (902.325µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.701126  108596 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.086956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:23.702115  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (876.854µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.702276  108596 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.164065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.702444  108596 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 03:04:23.703474  108596 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.478811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:23.703808  108596 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.174587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.704628  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.055259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.705699  108596 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.593451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.705811  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (811.738µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I0920 03:04:23.706003  108596 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 03:04:23.706072  108596 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
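
The two POSTs above seed the built-in PriorityClasses: system-node-critical at value 2000001000 and system-cluster-critical at 2000000000, which the scheduler depends on for critical pods. Creating an equivalent object through the same scheduling.k8s.io/v1beta1 group the log shows might look like this (current client-go method signatures assumed):

    import (
        "context"

        schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createNodeCritical mirrors the first priority-class POST in the log.
    func createNodeCritical(ctx context.Context, c kubernetes.Interface) error {
        pc := &schedulingv1beta1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
            Value:      2000001000,
        }
        _, err := c.SchedulingV1beta1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
        return err
    }
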
I0920 03:04:23.706933  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (706.133µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.707179  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.707267  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:23.707436  108596 httplog.go:90] GET /healthz: (1.065231ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:23.708357  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.003637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.709236  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (577.312µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.710490  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (842.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.711669  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (714.683µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.713285  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.713480  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 03:04:23.714274  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (631.203µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.715808  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.146701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.716081  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
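
From here on the log settles into the rbac/bootstrap-roles hook's reconcile loop: GET a default clusterrole, receive a 404, POST it, log "created clusterrole...". Stripped of the diff-and-update logic the real reconciler in storage_rbac.go carries for roles that already exist, the per-role step is essentially:

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureClusterRole creates a default role if it is absent; the real
    // reconciler also merges missing rules into roles that already exist.
    func ensureClusterRole(ctx context.Context, c kubernetes.Interface, role *rbacv1.ClusterRole) error {
        _, err := c.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
        if err == nil {
            return nil
        }
        if !apierrors.IsNotFound(err) {
            return err
        }
        _, err = c.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
        return err
    }
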
I0920 03:04:23.716971  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (594.587µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.718491  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.088558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.718777  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 03:04:23.719570  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (628.843µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.720955  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.074004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.721157  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 03:04:23.721950  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (584.267µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.723415  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.164734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.723650  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 03:04:23.724469  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (647.412µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.725965  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.110856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.726219  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 03:04:23.727045  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (624.099µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.728584  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.133297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.728762  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 03:04:23.729858  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (950.904µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.731447  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.230549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.731613  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 03:04:23.732443  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (680.692µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.734016  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.213573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.734346  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 03:04:23.735228  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (725.345µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.736907  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.254374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.737156  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 03:04:23.737998  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (678.382µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.739457  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.083607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.739734  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 03:04:23.740614  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (706.918µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.742427  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.487617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.742671  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 03:04:23.743586  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (738.732µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.745058  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.072778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.745291  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 03:04:23.746170  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (613.383µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.747641  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.05467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.747928  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 03:04:23.748740  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (640.669µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.750107  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.008334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.750362  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 03:04:23.751207  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (690.695µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.752734  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.180203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.753016  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 03:04:23.754030  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (760.761µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.755495  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.093909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.755750  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 03:04:23.756664  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (673.508µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.758259  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.758504  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 03:04:23.759293  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (652.341µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.760710  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.0919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.760893  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 03:04:23.761878  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (747.043µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.763526  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.763775  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 03:04:23.764607  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (648.528µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.766173  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.183191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.766433  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 03:04:23.767236  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (608.124µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.768860  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.208002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.769145  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 03:04:23.770116  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (659.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.771606  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.094028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.771790  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 03:04:23.772706  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (689.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.774593  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.774836  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 03:04:23.775763  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (700.892µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.777568  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.777754  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 03:04:23.778731  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (787.217µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.780386  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.200865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.780637  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 03:04:23.781689  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (856.355µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.783724  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.559902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.783877  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 03:04:23.784762  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (702.172µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.786334  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.2326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.786504  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 03:04:23.787367  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (683.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.788843  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.1905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.788974  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 03:04:23.789936  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (797.221µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.791735  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.791944  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 03:04:23.792801  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (640.533µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.794511  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.232039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.794771  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 03:04:23.795540  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (611.035µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.796881  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.044603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.797081  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 03:04:23.797999  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (734.465µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.798749  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.798777  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:23.798819  108596 httplog.go:90] GET /healthz: (686.812µs) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:23.799574  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.224603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.799736  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 03:04:23.800576  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (706.909µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.802083  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.192813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.802298  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 03:04:23.803109  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (600.684µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.804603  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.804861  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 03:04:23.805718  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (647.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.807151  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.807174  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:23.807211  108596 httplog.go:90] GET /healthz: (791.676µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:23.807399  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.311065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.807574  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 03:04:23.808573  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (792.935µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.809937  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.027083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.810185  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 03:04:23.811062  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (659.578µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.812644  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.194728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.812881  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 03:04:23.813793  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (696.614µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.815348  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.175238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.815577  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 03:04:23.816490  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (694.217µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.817956  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.119715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.818219  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 03:04:23.819081  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (638.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.820686  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.187614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.820981  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 03:04:23.821887  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (722.335µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.823239  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.047287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.823540  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 03:04:23.824442  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (684.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.826039  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.131433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.826260  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 03:04:23.827202  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (721.635µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.828730  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.181598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.828948  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 03:04:23.829845  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (743.708µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.831387  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.831636  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 03:04:23.832482  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (688.719µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.833852  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.017657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.834090  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 03:04:23.834804  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (542.818µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.836139  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.032927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.836291  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 03:04:23.837121  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (658.662µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.838667  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.149823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.838890  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 03:04:23.839777  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (661.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.841361  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.204211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.841651  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 03:04:23.859263  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.212946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.880017  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.036284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.880233  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 03:04:23.898928  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.898960  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:23.899011  108596 httplog.go:90] GET /healthz: (796.73µs) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:23.899013  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.064934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.907349  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.907393  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:23.907440  108596 httplog.go:90] GET /healthz: (914.151µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.919909  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.981771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.920131  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 03:04:23.923403  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.923520  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.925152  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.925395  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.925975  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.927643  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:23.928403  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
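The "forcing resync" bursts come from client-go reflectors backing shared informers built with a non-zero resync period; on each period the reflector replays its cached objects to the registered handlers. A minimal, self-contained setup that would produce the same steady cadence (the kubeconfig path is illustrative, not taken from the log):

```go
package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path -- not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A short resync period makes each informer's reflector periodically
	// replay its cache to handlers; that replay is what logs
	// "reflector.go ... forcing resync" at a fixed interval.
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	factory.Core().V1().Pods().Informer() // register one informer so something syncs

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop // run until killed
}
```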
I0920 03:04:23.939245  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.277767ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.959785  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.878337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.960038  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 03:04:23.979129  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.154294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:23.999617  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:23.999653  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:23.999688  108596 httplog.go:90] GET /healthz: (1.504427ms) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:23.999872  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.880172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.000131  108596 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 03:04:24.007053  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.007154  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.007259  108596 httplog.go:90] GET /healthz: (797.332µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.018724  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (869.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.039669  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.704949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.040103  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
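The object behind the POST above is, approximately, the default binding of the cluster-admin ClusterRole to the system:masters group. An illustrative Go literal (the authoritative definitions live in the bootstrap policy package, not reproduced here):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Roughly the payload of the POST above (illustrative, not copied
	// from the bootstrap policy source).
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "cluster-admin"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Group",
			Name:     "system:masters",
		}},
	}
	fmt.Printf("would create clusterrolebinding %s\n", binding.Name)
}
```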
I0920 03:04:24.059218  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.196598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.067661  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.067953  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.069666  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.069806  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.071191  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.071226  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.079861  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.080078  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 03:04:24.099164  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.099196  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.099241  108596 httplog.go:90] GET /healthz: (1.087897ms) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:24.099342  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.365922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.107238  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.107393  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.107525  108596 httplog.go:90] GET /healthz: (1.067595ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
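The two interleaved /healthz callers (connections :35414 and :35416) poll at roughly 100ms intervals until the endpoint turns 200. A sketch of such a readiness poll loop, with a hypothetical URL and timeout:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200, matching the ~100ms cadence
// of the repeated GET /healthz lines above. URL and timeout are illustrative.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("%s never became healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```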
I0920 03:04:24.119626  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.714962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.119967  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 03:04:24.139282  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.291212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.148094  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.148103  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.148095  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.148375  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.148381  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.149357  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.160095  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.0938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.160393  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 03:04:24.179178  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.204213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.199464  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.199494  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.199531  108596 httplog.go:90] GET /healthz: (1.333237ms) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:24.199919  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.892573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.200130  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 03:04:24.207375  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.207402  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.207445  108596 httplog.go:90] GET /healthz: (959.079µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.219175  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.27229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.239879  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.926223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.240117  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 03:04:24.259117  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.147897ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.277599  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.279677  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.725057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.279940  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 03:04:24.299264  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.343145ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.299473  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.299502  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.299632  108596 httplog.go:90] GET /healthz: (963.106µs) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:24.307428  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.307483  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.307524  108596 httplog.go:90] GET /healthz: (1.058541ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.320208  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.268433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.320414  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 03:04:24.339296  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.32893ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.353232  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.360087  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.019824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.360416  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 03:04:24.379304  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.265494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.399243  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.399273  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.399332  108596 httplog.go:90] GET /healthz: (1.129379ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:24.399963  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.930321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.400209  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 03:04:24.407298  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.407471  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.407641  108596 httplog.go:90] GET /healthz: (1.148139ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.419147  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.187285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.440040  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.039737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.440425  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 03:04:24.459307  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.320604ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.480159  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.480587  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 03:04:24.499141  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.195553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.499177  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.499202  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.499262  108596 httplog.go:90] GET /healthz: (1.040671ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:24.507211  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.507238  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.507281  108596 httplog.go:90] GET /healthz: (821.444µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.519620  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.717604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.519881  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 03:04:24.539290  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.377013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.559658  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.674406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.559920  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 03:04:24.579266  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.325815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.599226  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.599258  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.599291  108596 httplog.go:90] GET /healthz: (1.098934ms) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:24.600461  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.26461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.600764  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 03:04:24.607051  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.607076  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.607131  108596 httplog.go:90] GET /healthz: (722.27µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.618670  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (824.226µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.639591  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.721171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.639944  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 03:04:24.680052  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (22.113563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.696309  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (15.538531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.696565  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 03:04:24.700876  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (3.046881ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:24.702391  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.702638  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.702919  108596 httplog.go:90] GET /healthz: (2.11662ms) 0 [Go-http-client/1.1 127.0.0.1:35416]
I0920 03:04:24.707168  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.707190  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.707221  108596 httplog.go:90] GET /healthz: (847.877µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.751177  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (33.258125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.751499  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 03:04:24.753225  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.490324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.759826  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.760187  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 03:04:24.779237  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.33158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.799071  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.799107  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.799140  108596 httplog.go:90] GET /healthz: (967.387µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:24.799730  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.811734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.799955  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 03:04:24.807280  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.807346  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.807398  108596 httplog.go:90] GET /healthz: (930.058µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.819017  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.089525ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.841604  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.915501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.841842  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 03:04:24.859074  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.120655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.879935  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.931528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.880133  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 03:04:24.898801  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.898831  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.898874  108596 httplog.go:90] GET /healthz: (776.77µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:24.899028  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.151886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.907707  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.907732  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.907770  108596 httplog.go:90] GET /healthz: (1.335734ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.919493  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.603197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.919819  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 03:04:24.923624  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.923629  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.925346  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.925584  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.926105  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.927736  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.928600  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:24.939094  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.159394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.959686  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.755754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.959904  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 03:04:24.979214  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.252205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:24.999440  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:24.999492  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:24.999530  108596 httplog.go:90] GET /healthz: (1.393624ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.000031  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.079012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.000240  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 03:04:25.007399  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.007430  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.007462  108596 httplog.go:90] GET /healthz: (978.811µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.018929  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.043966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.039742  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.831568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.040046  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 03:04:25.058715  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (816.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.067925  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.068117  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.069852  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.069973  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.071493  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.071522  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.079713  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.812656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.079883  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 03:04:25.099158  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.099184  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.099220  108596 httplog.go:90] GET /healthz: (1.118676ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.099384  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.460143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.107279  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.107308  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.107373  108596 httplog.go:90] GET /healthz: (867.938µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.119535  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.684438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.119759  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 03:04:25.138930  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.005831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.148278  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.148347  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.148357  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.148599  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.148608  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.149492  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.159667  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.668643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.159984  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 03:04:25.178973  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.069352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.199045  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.199169  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.199303  108596 httplog.go:90] GET /healthz: (1.113747ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.199876  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.902684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.200134  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 03:04:25.207220  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.207265  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.207300  108596 httplog.go:90] GET /healthz: (843.551µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.219025  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.138058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.239385  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.467998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.239719  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 03:04:25.259109  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.151575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.277782  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.279779  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.873599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.280076  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 03:04:25.298921  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.298951  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.298985  108596 httplog.go:90] GET /healthz: (809.73µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.299304  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.345281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.307234  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.307263  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.307295  108596 httplog.go:90] GET /healthz: (861.574µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.319643  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.748464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.319906  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 03:04:25.338952  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.079751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.353428  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.359688  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.773596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.359986  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 03:04:25.378913  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.023227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.399129  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.399164  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.399203  108596 httplog.go:90] GET /healthz: (1.014733ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.399883  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.918991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.400121  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 03:04:25.407071  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.407100  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.407147  108596 httplog.go:90] GET /healthz: (626.697µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.418780  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (869.651µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.439653  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.700499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.439955  108596 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 03:04:25.458955  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.034267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.460479  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.114815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.479551  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.67557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.479843  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
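For namespaced objects the same reconciliation gains one step, visible above: GET the Role (404), GET the namespace (200) to confirm it exists, then POST the Role (201). A sketch under the same client-go assumptions as the ClusterRole example earlier:

```go
package bootstrapsketch

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRole mirrors the namespaced sequence above: GET the Role (expect
// 404), confirm the namespace exists, then POST the Role.
func ensureRole(ctx context.Context, cs kubernetes.Interface, ns string, role *rbacv1.Role) error {
	_, err := cs.RbacV1().Roles(ns).Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	// The namespace check matches the GET /api/v1/namespaces/<ns> 200 above.
	if _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); err != nil {
		return err
	}
	_, err = cs.RbacV1().Roles(ns).Create(ctx, role, metav1.CreateOptions{})
	return err
}
```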
I0920 03:04:25.498950  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.498978  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.499005  108596 httplog.go:90] GET /healthz: (893.473µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.499010  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.071577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.500518  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.134669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.507066  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.507096  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.507129  108596 httplog.go:90] GET /healthz: (673.625µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.519661  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.698444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.519993  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 03:04:25.538826  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (886.946µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.540528  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.168668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.559614  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.729936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.559931  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 03:04:25.578964  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.062743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.580543  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.131162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.599084  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.599133  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.599174  108596 httplog.go:90] GET /healthz: (1.003025ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.599673  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.750176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.599918  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 03:04:25.607081  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.607107  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.607149  108596 httplog.go:90] GET /healthz: (832.086µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.618758  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (851.628µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.620214  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.048253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.640234  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.802398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.640512  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 03:04:25.658874  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (987.031µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.660271  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.020714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.679478  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.57276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.679701  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 03:04:25.698873  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (942.042µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.698916  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.698939  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.698976  108596 httplog.go:90] GET /healthz: (770.478µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.700354  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.059037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.707172  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.707222  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.707257  108596 httplog.go:90] GET /healthz: (803.798µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.719609  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.702455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.719855  108596 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 03:04:25.738954  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.05235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.740398  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.082344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.759736  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.796016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.759980  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 03:04:25.779001  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.07437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.780615  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.165952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.799053  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.799088  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.799123  108596 httplog.go:90] GET /healthz: (966.618µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.799550  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.683257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.799735  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 03:04:25.807073  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.807134  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.807181  108596 httplog.go:90] GET /healthz: (736.796µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.818810  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (862.111µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.820382  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.130506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.839135  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.250956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.839375  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 03:04:25.859023  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.107785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.860567  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.06518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.879611  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.669531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.879862  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 03:04:25.898889  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (990.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.899061  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.899095  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.899125  108596 httplog.go:90] GET /healthz: (1.005029ms) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.900379  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.014982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.907082  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.907108  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.907175  108596 httplog.go:90] GET /healthz: (727.227µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.919292  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.421415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.919547  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 03:04:25.923855  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.923870  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.925546  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.925847  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.926295  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.927925  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:25.928840  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
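The bursts of `forcing resync` lines come from shared informers built with a very short resync period; the reflectors started later in this log all report `(1s)`. A sketch of constructing such a factory, assuming an already-configured client:

```go
package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newFactory builds a shared informer factory whose 1s default resync
// period produces the periodic "forcing resync" lines seen above.
func newFactory(client kubernetes.Interface) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactory(client, 1*time.Second)
}
```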
I0920 03:04:25.939054  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.178739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.940622  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.080878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.957078  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.148486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0920 03:04:25.958674  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.137059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0920 03:04:25.959142  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.369591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.959352  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 03:04:25.959985  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (990.595µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0920 03:04:25.979061  108596 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.170597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.980484  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.004864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:25.998891  108596 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 03:04:25.998924  108596 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 03:04:25.998961  108596 httplog.go:90] GET /healthz: (787.98µs) 0 [Go-http-client/1.1 127.0.0.1:35414]
I0920 03:04:25.999818  108596 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.886825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.000037  108596 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 03:04:26.007203  108596 httplog.go:90] GET /healthz: (700.638µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.008452  108596 httplog.go:90] GET /api/v1/namespaces/default: (927.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.010236  108596 httplog.go:90] POST /api/v1/namespaces: (1.460454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.011607  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (927.721µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.014803  108596 httplog.go:90] POST /api/v1/namespaces/default/services: (2.703687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.015994  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (764.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.017510  108596 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.192696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.068092  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.068296  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.070024  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.070183  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.071665  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.071666  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.099444  108596 httplog.go:90] GET /healthz: (1.056737ms) 200 [Go-http-client/1.1 127.0.0.1:35416]
W0920 03:04:26.100586  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100644  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100688  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100700  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100733  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100745  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100759  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100769  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100783  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100795  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100807  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.100870  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:04:26.100906  108596 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 03:04:26.100917  108596 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
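The predicate and priority sets in the line above are printed with Go's default formatting for map[string]struct{}, which is why every name is followed by `:{}`. A small illustration, repeating only a few of the logged names:

```go
package main

import "fmt"

// Fit predicates are held as a set; Go prints map[string]struct{} values
// as 'map[Name:{} ...]', matching the log line above.
func main() {
	fitPredicates := map[string]struct{}{
		"CheckNodeUnschedulable": {},
		"GeneralPredicates":      {},
		"PodToleratesNodeTaints": {},
	}
	fmt.Printf("Creating scheduler with fit predicates '%v'\n", fitPredicates)
}
```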
I0920 03:04:26.101145  108596 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 03:04:26.101395  108596 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 03:04:26.101417  108596 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 03:04:26.102435  108596 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (684.045µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I0920 03:04:26.103278  108596 get.go:251] Starting watch for /api/v1/pods, rv=59815 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m31s
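The pod LIST two lines up carries a percent-encoded field selector; decoding it yields exactly the `fields=` expression in the watch line above. A stdlib check:

```go
package main

import (
	"fmt"
	"net/url"
)

// Decode the field selector from the pod LIST above; the result matches
// the 'fields=status.phase!=Failed,status.phase!=Succeeded' watch line.
func main() {
	raw := "status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded"
	decoded, err := url.QueryUnescape(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // status.phase!=Failed,status.phase!=Succeeded
}
```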
I0920 03:04:26.148544  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.148610  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.148656  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.148855  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.149022  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.149717  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.201345  108596 shared_informer.go:227] caches populated
I0920 03:04:26.201381  108596 shared_informer.go:204] Caches are synced for scheduler 
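`Waiting for caches to sync` and `Caches are synced` bracket a cache.WaitForCacheSync call. A minimal sketch, assuming stopCh and the HasSynced funcs come from informers that are already running:

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// waitForScheduler blocks until the given informers have synced, mirroring
// the 'Waiting for caches to sync' / 'Caches are synced' pair above.
func waitForScheduler(stopCh <-chan struct{}, synced ...cache.InformerSynced) error {
	if !cache.WaitForCacheSync(stopCh, synced...) {
		return fmt.Errorf("timed out waiting for caches to sync")
	}
	return nil
}
```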
I0920 03:04:26.201738  108596 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201767  108596 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201771  108596 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201793  108596 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201790  108596 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201806  108596 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201842  108596 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201859  108596 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201866  108596 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201878  108596 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201904  108596 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201738  108596 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201916  108596 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.201921  108596 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.202084  108596 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.202094  108596 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.202687  108596 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.202772  108596 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.203176  108596 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (338.736µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35432]
I0920 03:04:26.203219  108596 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (408.728µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35420]
I0920 03:04:26.203180  108596 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (613.712µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I0920 03:04:26.203294  108596 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.203308  108596 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.203180  108596 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (332.157µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35438]
I0920 03:04:26.203176  108596 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (556.938µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35424]
I0920 03:04:26.203188  108596 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (541.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I0920 03:04:26.203501  108596 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (359.613µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35442]
I0920 03:04:26.203580  108596 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (898.869µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0920 03:04:26.203729  108596 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.069284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I0920 03:04:26.204029  108596 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59815 labels= fields= timeout=5m3s
I0920 03:04:26.204080  108596 get.go:251] Starting watch for /api/v1/services, rv=59929 labels= fields= timeout=7m53s
I0920 03:04:26.204109  108596 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59815 labels= fields= timeout=7m54s
I0920 03:04:26.204123  108596 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59815 labels= fields= timeout=5m34s
I0920 03:04:26.204115  108596 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59815 labels= fields= timeout=8m36s
I0920 03:04:26.204248  108596 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59815 labels= fields= timeout=9m8s
I0920 03:04:26.204276  108596 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59815 labels= fields= timeout=5m23s
I0920 03:04:26.204375  108596 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59815 labels= fields= timeout=6m22s
I0920 03:04:26.204375  108596 get.go:251] Starting watch for /api/v1/nodes, rv=59815 labels= fields= timeout=5m46s
I0920 03:04:26.204943  108596 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (298.367µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35448]
I0920 03:04:26.205516  108596 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59815 labels= fields= timeout=6m36s
I0920 03:04:26.277933  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.301665  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301700  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301706  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301711  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301715  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301719  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301723  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301728  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301731  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301737  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301744  108596 shared_informer.go:227] caches populated
I0920 03:04:26.301807  108596 node_lifecycle_controller.go:327] Sending events to api server.
I0920 03:04:26.301861  108596 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 03:04:26.301877  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:04:26.301950  108596 taint_manager.go:162] Sending events to api server.
I0920 03:04:26.302009  108596 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 03:04:26.302027  108596 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 03:04:26.302039  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 03:04:26.302062  108596 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 03:04:26.302198  108596 node_lifecycle_controller.go:488] Starting node controller
I0920 03:04:26.302233  108596 shared_informer.go:197] Waiting for caches to sync for taint
I0920 03:04:26.304369  108596 httplog.go:90] POST /api/v1/namespaces: (1.72927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35450]
I0920 03:04:26.304560  108596 node_lifecycle_controller.go:327] Sending events to api server.
I0920 03:04:26.304616  108596 node_lifecycle_controller.go:359] Controller is using taint based evictions.
I0920 03:04:26.304692  108596 taint_manager.go:162] Sending events to api server.
I0920 03:04:26.304740  108596 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 03:04:26.304755  108596 node_lifecycle_controller.go:465] Controller will taint node by condition.
I0920 03:04:26.304779  108596 node_lifecycle_controller.go:488] Starting node controller
I0920 03:04:26.304792  108596 shared_informer.go:197] Waiting for caches to sync for taint
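`Controller will taint node by condition` means node conditions such as NotReady are mirrored into taints; the resulting node.kubernetes.io/not-ready taint shows up further down in this log. A sketch of that taint object, with TimeAdded set to now purely for illustration:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The not-ready condition is mirrored into this NoSchedule taint; key and
// effect match the 'Added [&Taint{...}] Taint to Node node-2' lines below.
func main() {
	added := metav1.NewTime(time.Now())
	taint := v1.Taint{
		Key:       "node.kubernetes.io/not-ready",
		Effect:    v1.TaintEffectNoSchedule,
		TimeAdded: &added,
	}
	fmt.Printf("Added [&%v] Taint to Node node-2\n", taint)
}
```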
I0920 03:04:26.304934  108596 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.304943  108596 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.305910  108596 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (597.49µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35450]
I0920 03:04:26.306758  108596 get.go:251] Starting watch for /api/v1/namespaces, rv=59931 labels= fields= timeout=5m56s
I0920 03:04:26.353693  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.404928  108596 shared_informer.go:227] caches populated
I0920 03:04:26.404986  108596 shared_informer.go:227] caches populated
I0920 03:04:26.404994  108596 shared_informer.go:227] caches populated
I0920 03:04:26.405001  108596 shared_informer.go:227] caches populated
I0920 03:04:26.405010  108596 shared_informer.go:227] caches populated
I0920 03:04:26.405016  108596 shared_informer.go:227] caches populated
I0920 03:04:26.405274  108596 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.405301  108596 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.405297  108596 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.405348  108596 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.405274  108596 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.405371  108596 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 03:04:26.406357  108596 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (493.181µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 03:04:26.406378  108596 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (452.386µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35456]
I0920 03:04:26.406378  108596 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (532.902µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 03:04:26.406991  108596 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59815 labels= fields= timeout=9m16s
I0920 03:04:26.407009  108596 get.go:251] Starting watch for /api/v1/pods, rv=59815 labels= fields= timeout=6m23s
I0920 03:04:26.407465  108596 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59815 labels= fields= timeout=9m1s
I0920 03:04:26.481388  108596 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 03:04:26.481430  108596 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 03:04:26.481459  108596 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0920 03:04:26.481463  108596 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0920 03:04:26.481472  108596 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0920 03:04:26.481476  108596 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0920 03:04:26.481614  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"5e048cfe-612b-4c57-9d1d-7255a95734d4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0920 03:04:26.481655  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"8d4c3851-88bb-4716-b8cb-c6737511dbd0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0920 03:04:26.481665  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"aadc1fd4-6442-46f8-b587-f84b8dbbaee3", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 03:04:26.483986  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (2.04761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:26.486020  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.538536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:26.487429  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.056296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:26.488592  108596 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0920 03:04:26.488692  108596 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0920 03:04:26.488734  108596 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0920 03:04:26.488773  108596 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0920 03:04:26.488807  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"8d4c3851-88bb-4716-b8cb-c6737511dbd0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0920 03:04:26.488934  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"5e048cfe-612b-4c57-9d1d-7255a95734d4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0920 03:04:26.488831  108596 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 03:04:26.489029  108596 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 03:04:26.489103  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"aadc1fd4-6442-46f8-b587-f84b8dbbaee3", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 03:04:26.490400  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.092706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:26.492136  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.242411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:26.493809  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.14023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:26.502417  108596 shared_informer.go:227] caches populated
I0920 03:04:26.502439  108596 shared_informer.go:204] Caches are synced for taint 
I0920 03:04:26.502479  108596 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 03:04:26.504947  108596 shared_informer.go:227] caches populated
I0920 03:04:26.504968  108596 shared_informer.go:204] Caches are synced for taint 
I0920 03:04:26.505016  108596 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 03:04:26.505184  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505223  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505230  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505234  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505242  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505246  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505249  108596 shared_informer.go:227] caches populated
I0920 03:04:26.505253  108596 shared_informer.go:227] caches populated
I0920 03:04:26.507622  108596 httplog.go:90] POST /api/v1/nodes: (1.807539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.508052  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 03:04:26.508073  108596 taint_manager.go:438] Updating known taints on node node-0: []
I0920 03:04:26.508097  108596 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 03:04:26.508127  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 03:04:26.508142  108596 taint_manager.go:438] Updating known taints on node node-0: []
I0920 03:04:26.509342  108596 httplog.go:90] POST /api/v1/nodes: (1.324083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.509561  108596 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 03:04:26.509598  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 03:04:26.509606  108596 taint_manager.go:438] Updating known taints on node node-1: []
I0920 03:04:26.509634  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 03:04:26.509646  108596 taint_manager.go:438] Updating known taints on node node-1: []
I0920 03:04:26.511116  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:26.511144  108596 taint_manager.go:438] Updating known taints on node node-2: []
I0920 03:04:26.511145  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:26.511159  108596 taint_manager.go:438] Updating known taints on node node-2: []
I0920 03:04:26.511176  108596 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
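The NodeTree group "region1:\x00:zone1" above is a zone key: region and zone joined with a NUL byte so the pair cannot collide with a single label that happens to contain a colon. A plain-Go illustration:

```go
package main

import "fmt"

// The node tree groups nodes by a key of the form region + ":\x00:" + zone.
// The NUL separator keeps ("region1", "zone1") distinct from any single
// label value that merely contains colons.
func main() {
	region, zone := "region1", "zone1"
	key := region + ":\x00:" + zone
	fmt.Printf("%q\n", key) // "region1:\x00:zone1"
}
```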
I0920 03:04:26.511382  108596 httplog.go:90] POST /api/v1/nodes: (1.7046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.513144  108596 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods: (1.380023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.513518  108596 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:26.513539  108596 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:26.513542  108596 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2"}
I0920 03:04:26.513587  108596 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2"}
I0920 03:04:26.513803  108596 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2", node "node-2"
I0920 03:04:26.513823  108596 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2", node "node-2": all PVCs bound and nothing to do
I0920 03:04:26.513866  108596 factory.go:606] Attempting to bind testpod-2 to node-2
I0920 03:04:26.515539  108596 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2/binding: (1.470281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.515729  108596 scheduler.go:662] pod taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 is bound successfully on node "node-2", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0920 03:04:26.515788  108596 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2"}
I0920 03:04:26.515795  108596 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2"}
I0920 03:04:26.517423  108596 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/events: (1.448913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
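The bind step above is a POST of a v1.Binding to the pod's `binding` subresource. A sketch of the equivalent object, with namespace and names copied from the log:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The POST to .../pods/testpod-2/binding carries a Binding whose Target
// names the chosen node; values here are copied from the log lines above.
func main() {
	b := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373",
			Name:      "testpod-2",
		},
		Target: v1.ObjectReference{Kind: "Node", Name: "node-2"},
	}
	fmt.Printf("bind %s/%s -> %s\n", b.Namespace, b.Name, b.Target.Name)
}
```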
I0920 03:04:26.518512  108596 httplog.go:90] GET /api/v1/namespaces/kube-system: (954.497µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:26.519805  108596 httplog.go:90] GET /api/v1/namespaces/kube-public: (905.394µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:26.520952  108596 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (859.999µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:26.615504  108596 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2: (1.565689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.617158  108596 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2: (1.139369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.618599  108596 httplog.go:90] GET /api/v1/nodes/node-2: (975.601µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.620834  108596 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.696737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.621842  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (401.543µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.622135  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (413.708µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:26.624422  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-2 failed because of a conflict, going to retry
I0920 03:04:26.624659  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.973778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:26.624929  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:26.621249798 +0000 UTC m=+359.218755708,}] Taint to Node node-2
I0920 03:04:26.625086  108596 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 03:04:26.625116  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.323007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.625438  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:26.621289644 +0000 UTC m=+359.218795625,}] Taint to Node node-2
I0920 03:04:26.625464  108596 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
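`GuaranteedUpdate ... failed because of a conflict, going to retry` above is optimistic concurrency at work: two controllers PATCHed node-2 against the same resourceVersion, so one write lost and was retried. Client-side, the idiomatic pattern is retry.RetryOnConflict; updateNode here is a hypothetical stand-in for the real node update:

```go
package sketch

import (
	"k8s.io/client-go/util/retry"
)

// On a 409 Conflict the func is re-run: re-read the object, re-apply the
// change, and update again. updateNode is a hypothetical stand-in for the
// controller's real node update.
func patchWithRetry(updateNode func() error) error {
	return retry.RetryOnConflict(retry.DefaultRetry, updateNode)
}
```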
I0920 03:04:26.723131  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.538145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.823196  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.543613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.923232  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.625583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:26.924028  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.924031  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.925677  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.926019  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.926456  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.928138  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:26.928986  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.023134  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.559408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.068262  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.068520  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.070196  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.070342  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.071834  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.071843  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.123083  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.499165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.148723  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.148754  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.148783  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.149020  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.149168  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.149866  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.203723  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.203796  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.204005  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.204158  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.204238  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.205418  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.223438  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.764454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.278101  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.323251  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.608605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.353849  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.406988  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.423258  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.614274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.523298  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.678733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.623198  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.537458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.723035  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.451483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.823252  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.629208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.922864  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.188144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:27.924209  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.924210  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.925881  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.926158  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.926614  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.928401  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:27.929150  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.023386  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.664955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.068438  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.068692  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.070388  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.070550  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.071990  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.071994  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.123281  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.634918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.148866  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.148884  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.148907  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.149191  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.149345  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.149976  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.203854  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.203932  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.204117  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.204349  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.204399  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.205562  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.223064  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.477841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.278269  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.323127  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.571256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.354027  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.407393  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.423145  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.485927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.523304  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.673279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.623191  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.530666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.723353  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.600282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.823162  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.520662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.923111  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.468511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:28.924515  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.924638  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.926037  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.926381  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.926783  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.928548  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:28.929269  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.023246  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.640653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.068636  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.068898  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.070571  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.070699  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.072157  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.072160  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.123034  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.399399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.149035  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.149035  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.149049  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.149543  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.149576  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.150197  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.203985  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.204054  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.204395  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.204475  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.204485  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.205713  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.223143  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.571263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.278421  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.323042  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.474975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.354222  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.407577  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.422935  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.36405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.523223  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.609682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.623053  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.462297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.723070  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.48083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.807491  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.52273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:29.809267  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.293308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:29.810765  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.049342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:29.822881  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.303342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.923040  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.421339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:29.924688  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.924859  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.926271  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.926635  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.926915  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.928719  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:29.929365  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.023535  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.798569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.068814  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.069084  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.070712  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.070835  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.072397  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.072399  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.123427  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.809034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.149359  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.149360  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.149449  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.149788  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.149865  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.150401  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.204206  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.204219  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.204520  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.204611  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.204752  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.205856  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.222980  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.380654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.278612  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.322963  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.384692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.354399  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.407785  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.422952  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.31388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.522973  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.357494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.623054  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.403173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.722931  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.371228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.822908  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.320652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.923115  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.543281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:30.924884  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.925012  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.926419  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.926890  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.927134  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.928870  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:30.929413  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
E0920 03:04:30.937182  108596 factory.go:590] Error getting pod permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/signalling-pod for retry: Get http://127.0.0.1:36219/api/v1/namespaces/permit-plugin81c24e8a-bde4-43b1-95cb-3952fb3e4cc1/pods/signalling-pod: dial tcp 127.0.0.1:36219: connect: connection refused; retrying...
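The error above is cross-test noise: a retry loop is still trying to reach the apiserver of an earlier fixture (port 36219) that has already shut down, so the dial is refused and the factory keeps retrying. A generic retry-until-timeout sketch with apimachinery's wait package; fetch is hypothetical:

```go
package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryGet polls fetch until it succeeds or the timeout elapses; fetch is
// a hypothetical stand-in for the pod GET that is refused above.
func retryGet(fetch func() error) error {
	return wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
		if err := fetch(); err != nil {
			return false, nil // transient (e.g. connection refused): retry
		}
		return true, nil
	})
}
```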
I0920 03:04:30.947477  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.336497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:30.949186  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.207244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:30.950650  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (981.249µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:31.023206  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.57698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.068988  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.069306  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.070877  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.070972  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.072594  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.072585  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.123177  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.561359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.149555  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.149555  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.149564  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.149959  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.150008  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.150541  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.204436  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.204476  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.204755  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.204811  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.204871  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.206042  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.222919  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.31595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.278998  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.322939  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.344592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.354550  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.407976  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.423045  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.461556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.502622  108596 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 03:04:31.502665  108596 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 03:04:31.502724  108596 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 03:04:31.502737  108596 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 03:04:31.502742  108596 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 03:04:31.502749  108596 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 03:04:31.502754  108596 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 03:04:31.502784  108596 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 03:04:31.502818  108596 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 03:04:31.502838  108596 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 03:04:31.502936  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"e13974ab-9402-4f8b-93d8-310bc53df175", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 03:04:31.502973  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"1a355152-ae7c-4a3d-9b05-5231eb62899a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 03:04:31.502983  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"12d1a23f-38cb-4cc1-ba49-ce0930f2f219", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 03:04:31.502947  108596 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 03:04:31.50292615 +0000 UTC m=+364.100432063. Adding it to the Taint queue.
I0920 03:04:31.503070  108596 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 03:04:31.505047  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.784383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.505183  108596 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 03:04:31.505262  108596 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 03:04:31.505434  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"e13974ab-9402-4f8b-93d8-310bc53df175", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 03:04:31.505351  108596 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 03:04:31.505547  108596 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 03:04:31.505558  108596 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 03:04:31.505576  108596 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 03:04:31.505582  108596 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0920 03:04:31.505592  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"1a355152-ae7c-4a3d-9b05-5231eb62899a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
W0920 03:04:31.505619  108596 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 03:04:31.505657  108596 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 03:04:31.505689  108596 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 03:04:31.505716  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"12d1a23f-38cb-4cc1-ba49-ce0930f2f219", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 03:04:31.505729  108596 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 03:04:31.505715492 +0000 UTC m=+364.103221398. Adding it to the Taint queue.
I0920 03:04:31.505754  108596 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
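
The byte rendered as \x00 in the zone names above is a literal NUL: the node lifecycle controller builds zone keys as region + ":\x00:" + zone, so a ":" inside either label cannot produce an ambiguous key. A minimal sketch of that construction (the node-label lookup is omitted):

    package main

    import "fmt"

    // zoneKey mirrors the "region1:\x00:zone1" keys in this log; the NUL
    // separator keeps a colon inside a region or zone name unambiguous.
    func zoneKey(region, zone string) string {
        return region + ":\x00:" + zone
    }

    func main() {
        fmt.Printf("%q\n", zoneKey("region1", "zone1")) // "region1:\x00:zone1"
    }
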
I0920 03:04:31.507199  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.558135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.507477  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.547466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:31.508741  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.152126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.509155  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.381282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:31.509724  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (745.202µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.510841  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.239023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:31.512001  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (289.609µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:31.513189  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.697729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.513476  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 03:04:31.508778508 +0000 UTC m=+364.106284425,}] Taint to Node node-2
I0920 03:04:31.513511  108596 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 03:04:31.513587  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:31.513610  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:31.513605  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 03:04:31 +0000 UTC}]
I0920 03:04:31.513651  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:31.513644503 +0000 UTC m=+364.111150415 to be fired at 2019-09-20 03:04:31.513644503 +0000 UTC m=+364.111150415
I0920 03:04:31.513638  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 03:04:31 +0000 UTC}]
I0920 03:04:31.513675  108596 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:31.513693  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:31.513684029 +0000 UTC m=+364.111189941 to be fired at 2019-09-20 03:04:31.513684029 +0000 UTC m=+364.111189941
I0920 03:04:31.513735  108596 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:31.513850  108596 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:31.513979  108596 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:31.514800  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.98863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:31.515064  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 03:04:31.511535336 +0000 UTC m=+364.109041225,}] Taint to Node node-2
I0920 03:04:31.515093  108596 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
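
Both PATCH /api/v1/nodes/node-2 requests above carry a taint change computed the same way: serialize the node before and after editing spec.taints, then diff the two into a strategic merge patch. A sketch of building that patch body with apimachinery (the clientset call that actually sends it is omitted, and the function name here is illustrative):

    package taints

    import (
        "encoding/json"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    // addTaintPatch returns the strategic-merge-patch body that adds one
    // taint to a node, roughly what gets PATCHed in the lines above.
    // The resulting bytes look like {"spec":{"taints":[...]}}.
    func addTaintPatch(node *v1.Node, taint v1.Taint) ([]byte, error) {
        oldData, err := json.Marshal(node)
        if err != nil {
            return nil, err
        }
        updated := node.DeepCopy()
        updated.Spec.Taints = append(updated.Spec.Taints, taint)
        newData, err := json.Marshal(updated)
        if err != nil {
            return nil, err
        }
        // Diff old vs. new into the patch the API server will apply.
        return strategicpatch.CreateTwoWayMergePatch(oldData, newData, v1.Node{})
    }
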
I0920 03:04:31.515469  108596 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/events: (933.651µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:31.515641  108596 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/events: (1.041088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 03:04:31.516014  108596 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2: (1.852357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35466]
I0920 03:04:31.516304  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/pods/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 failed because of a conflict, going to retry
I0920 03:04:31.516659  108596 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2: (2.40082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35472]
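
The eviction that just ran is the TimedWorkerQueue pattern visible at 03:04:31.513: the pod tolerates the not-ready taint for zero seconds, so each worker's fire time equals its add time and the DELETE goes out immediately. A self-contained sketch of such a queue, assuming nothing beyond time.AfterFunc (an illustration of the pattern, not the controller's actual type); its Cancel half appears again further down when the taints are briefly cleared:

    package timedworkers

    import (
        "sync"
        "time"
    )

    // timedQueue runs one function per key at its fire time and lets a
    // later event cancel it first — the add/cancel pairing seen in this log.
    type timedQueue struct {
        mu      sync.Mutex
        pending map[string]*time.Timer
    }

    func newTimedQueue() *timedQueue {
        return &timedQueue{pending: map[string]*time.Timer{}}
    }

    // Add schedules fn to run at fireAt; re-adding a key keeps the first timer.
    func (q *timedQueue) Add(key string, fireAt time.Time, fn func()) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if _, ok := q.pending[key]; ok {
            return
        }
        q.pending[key] = time.AfterFunc(time.Until(fireAt), func() {
            q.mu.Lock()
            delete(q.pending, key)
            q.mu.Unlock()
            fn()
        })
    }

    // Cancel stops a scheduled worker that has not fired yet.
    func (q *timedQueue) Cancel(key string) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if t, ok := q.pending[key]; ok {
            t.Stop()
            delete(q.pending, key)
        }
    }
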
I0920 03:04:31.522296  108596 httplog.go:90] GET /api/v1/nodes/node-2: (814.324µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:31.623176  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.611523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:31.722987  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.434779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:31.822882  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.249363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:31.922823  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.297167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:31.925080  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.925199  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.926559  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.927094  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.927275  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.929018  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:31.929542  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.023143  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.547171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.069365  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.069516  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.071025  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.071221  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.072872  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.072873  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.123196  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.568803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.149750  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.149768  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.149755  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.150110  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.150156  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.150681  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.204804  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.204895  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.204814  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.204951  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.204987  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.206211  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.222903  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.389912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.279252  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.323368  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.731255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.354732  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.408163  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.423056  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.490299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.523255  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.661342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.623028  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.447921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.723284  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.700024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.822865  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.269162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.923403  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.772322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:32.925220  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.925362  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.926722  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.927309  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.927445  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.929214  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:32.929681  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.023259  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.629501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.069518  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.069663  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.071281  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.071524  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.073006  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.073030  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.123245  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.616993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.149930  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.149946  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.149950  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.150272  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.150282  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.150849  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.205049  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.205089  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.205096  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.205054  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.205058  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.206367  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.223177  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.586673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.279448  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.323253  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.595288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.354915  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.408352  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.422965  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.385041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.523269  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.623741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.623388  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.672656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.723090  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.5082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.823250  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.654911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.922998  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.412566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:33.925391  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.925473  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.926898  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.927622  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.927631  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.929395  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:33.929853  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.022921  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.353836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.069721  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.069812  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.071440  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.071646  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.073172  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.073177  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.123148  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.551103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.150295  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.150295  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.150412  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.150432  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.150552  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.150981  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.205231  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.205245  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.205288  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.205245  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.205589  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.206531  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.223076  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.48973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.279645  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.323055  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.424576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.355115  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.408556  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.423364  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.728609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.523255  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.545618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.623254  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.680791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.723105  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.516311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.822999  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.42358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.923056  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.413927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:34.925549  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.925618  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.927120  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.927777  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.927786  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.929560  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:34.930067  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.023003  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.441342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.070139  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.070142  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.071627  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.071785  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.073377  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.073499  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.123106  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.444188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.150575  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.150598  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.150575  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.150598  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.150746  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.151132  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.205399  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.205476  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.205479  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.205503  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.205758  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.206675  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.223385  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.712083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.279796  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.326556  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.661651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.355303  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.408741  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.423484  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.787002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.523625  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.896782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.623169  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.571983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.722981  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.407475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.823260  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.607239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.923529  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.815188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:35.925718  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.925772  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.927262  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.927849  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.927925  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.929702  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.930214  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:35.957434  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.399992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0920 03:04:35.959407  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.373595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0920 03:04:35.960883  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.019261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34136]
I0920 03:04:36.009105  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.311336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.010603  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.059086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.011981  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (962.559µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.022770  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.202334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.070342  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.070365  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.071794  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.071990  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.073550  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.073658  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.123072  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.447539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.150794  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.150854  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.150816  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.150885  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.150837  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.151293  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.205614  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.205641  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.205631  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.205619  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.205876  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.206826  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.223468  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.815487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.279970  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.322938  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.351578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.355500  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.408942  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.423150  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.550892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.503324  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000509766s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 03:04:36.503381  108596 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0920 03:04:36.503393  108596 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0920 03:04:36.503402  108596 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0920 03:04:36.505945  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000306384s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 03:04:36.506016  108596 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0920 03:04:36.506029  108596 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0920 03:04:36.506039  108596 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
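
The repeated "was never updated by kubelet" lines are the lifecycle controller's grace-period sweep: each expected condition (Ready, MemoryPressure, DiskPressure, PIDPressure) whose heartbeat is stale, or which the kubelet never reported at all, is rewritten to Unknown before the PUT /api/v1/nodes/node-0/status on the next line. A condensed sketch of that transition, using the real API types but deliberately simplified logic (the reason strings match the log; the surrounding flow is abbreviated):

    package lifecycle

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // markUnknown rewrites a stale condition to Unknown, mirroring the
    // NodeStatusUnknown / NodeStatusNeverUpdated transitions in this log.
    // gracePeriod is the only knob; the real controller tracks more state.
    func markUnknown(cond *v1.NodeCondition, now time.Time, gracePeriod time.Duration) {
        if now.Sub(cond.LastHeartbeatTime.Time) <= gracePeriod {
            return // heartbeat is fresh enough; leave the condition alone
        }
        if cond.Status != v1.ConditionUnknown {
            cond.Status = v1.ConditionUnknown
            cond.Reason = "NodeStatusUnknown"
            cond.Message = "Kubelet stopped posting node status."
            cond.LastTransitionTime = metav1.NewTime(now)
        }
    }
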
I0920 03:04:36.506479  108596 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.566302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.506939  108596 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 03:04:36.507060  108596 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 03:04:36.507187  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"e13974ab-9402-4f8b-93d8-310bc53df175", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0920 03:04:36.507727  108596 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (820.147µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.508137  108596 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (906.393µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35484]
I0920 03:04:36.508233  108596 httplog.go:90] PUT /api/v1/nodes/node-0/status: (1.791384ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
E0920 03:04:36.508728  108596 node_lifecycle_controller.go:1037] Error updating node node-0: Operation cannot be fulfilled on nodes "node-0": the object has been modified; please apply your changes to the latest version and try again
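
This 409 is ordinary optimistic concurrency: the controller's PUT carried a stale resourceVersion (the taint PATCHes above had already moved the node), so the API server rejects the write and the controller re-reads the node (the GET /api/v1/nodes/node-0 a few lines down) before trying again. client-go packages exactly this loop as retry.RetryOnConflict; a sketch with the read-modify-write left abstract:

    package lifecycle

    import (
        "k8s.io/client-go/util/retry"
    )

    // updateNodeStatus retries the read-modify-write whenever the API
    // server answers 409 Conflict. fetchAndPut stands in for the
    // GET + mutate + PUT /status round trip seen in this log.
    func updateNodeStatus(fetchAndPut func() error) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Each attempt must re-read the node so the PUT carries the
            // current resourceVersion; otherwise the conflict repeats.
            return fetchAndPut()
        })
    }
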
I0920 03:04:36.509515  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (1.775734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35486]
I0920 03:04:36.509634  108596 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.929498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35488]
I0920 03:04:36.509837  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.007008329s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 03:04:36.509878  108596 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0920 03:04:36.509888  108596 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0920 03:04:36.509893  108596 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0920 03:04:36.509913  108596 httplog.go:90] GET /api/v1/nodes/node-0: (937.798µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:36.511509  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-0 failed because of a conflict, going to retry
I0920 03:04:36.511588  108596 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.448338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35488]
I0920 03:04:36.511699  108596 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.037917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35474]
I0920 03:04:36.511895  108596 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 03:04:36.511968  108596 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 03:04:36.512042  108596 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"1a355152-ae7c-4a3d-9b05-5231eb62899a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0920 03:04:36.512268  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:36.506716334 +0000 UTC m=+369.104222223,}] Taint to Node node-0
I0920 03:04:36.512340  108596 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 03:04:36.512422  108596 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.306277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35484]
I0920 03:04:36.512683  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:36.506775009 +0000 UTC m=+369.104280973,}] Taint to Node node-0
I0920 03:04:36.512730  108596 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 03:04:36.513033  108596 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (361.22µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35484]
I0920 03:04:36.513108  108596 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (367.249µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35490]
I0920 03:04:36.513537  108596 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.043345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:36.513802  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.01087873s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 03:04:36.513866  108596 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 03:04:36.513880  108596 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 03:04:36.513888  108596 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 03:04:36.515635  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-1 failed because of a conflict, going to retry
I0920 03:04:36.516147  108596 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.029322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35492]
I0920 03:04:36.516161  108596 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.392099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35484]
I0920 03:04:36.516464  108596 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
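
"Entering master disruption mode" is the controller concluding that when every node in a zone goes not-Ready at once, the likelier fault is the control plane rather than the nodes, so it suspends taint-based evictions instead of evicting everything. A toy version of the ready/not-ready bookkeeping behind that decision (the state names mirror the log's "state Normal"; the numeric threshold is illustrative, not the controller's tuned default):

    package lifecycle

    type zoneState string

    const (
        stateNormal            zoneState = "Normal"
        statePartialDisruption zoneState = "PartialDisruption"
        stateFullDisruption    zoneState = "FullDisruption"
    )

    // computeZoneState is a toy version of the decision logged here: with
    // zero Ready nodes the zone is fully disrupted and evictions stop.
    func computeZoneState(ready, notReady int) zoneState {
        switch {
        case ready == 0 && notReady > 0:
            return stateFullDisruption
        case notReady > 2 && float64(notReady)/float64(ready+notReady) >= 0.55:
            return statePartialDisruption
        default:
            return stateNormal
        }
    }
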
I0920 03:04:36.516461  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:36.512411587 +0000 UTC m=+369.109917475,}] Taint to Node node-1
I0920 03:04:36.516489  108596 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 03:04:36.516729  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (302.339µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35484]
I0920 03:04:36.516822  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (333.949µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35492]
I0920 03:04:36.517070  108596 httplog.go:90] POST /api/v1/namespaces/default/events: (4.236554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35486]
I0920 03:04:36.517080  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (281.708µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35494]
I0920 03:04:36.517530  108596 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.721819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:36.517783  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:36.512383934 +0000 UTC m=+369.109889821,}] Taint to Node node-1
I0920 03:04:36.517815  108596 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 03:04:36.519119  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-2 failed because of a conflict, going to retry
I0920 03:04:36.519589  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.034215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35494]
I0920 03:04:36.519863  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:36.516300674 +0000 UTC m=+369.113806581,}] Taint to Node node-2
I0920 03:04:36.520041  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.484737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35492]
I0920 03:04:36.520287  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:36.516214613 +0000 UTC m=+369.113720560,}] Taint to Node node-2
I0920 03:04:36.520474  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (434.524µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35494]
I0920 03:04:36.520543  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-2 failed because of a conflict, going to retry
I0920 03:04:36.520916  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (400.893µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35492]
I0920 03:04:36.522260  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.143217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:36.522847  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-2 failed because of a conflict, going to retry
I0920 03:04:36.522941  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.522961  108596 taint_manager.go:438] Updating known taints on node node-2: []
I0920 03:04:36.522979  108596 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 03:04:36.522997  108596 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.522992978 +0000 UTC m=+369.120498889
I0920 03:04:36.523055  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.523070  108596 taint_manager.go:438] Updating known taints on node node-2: []
I0920 03:04:36.523082  108596 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 03:04:36.523090  108596 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.523087317 +0000 UTC m=+369.120593226
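
These CancelWork lines are the cancel half of the queue sketched above at 03:04:31: a node update clearing the NoExecute taints arrives, so the taint manager withdraws any eviction still pending for the pod. With the illustrative timedQueue from earlier that is just:

    q.Cancel("taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2")

Here the workers had already fired (the pod was deleted at 03:04:31.516), so in the sketch the cancel would be a no-op; moments later the not-ready taint is re-observed and fresh workers are queued, as the lines below show.
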
I0920 03:04:36.523741  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.79655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.523850  108596 store.go:362] GuaranteedUpdate of /c30d22f9-3e39-4b9d-8eed-182b350fd9ea/minions/node-2 failed because of a conflict, going to retry
I0920 03:04:36.523961  108596 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:26 +0000 UTC,}] Taint
I0920 03:04:36.523986  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.524005  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 03:04:31 +0000 UTC}]
I0920 03:04:36.524028  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.524037  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.524026096 +0000 UTC m=+369.121532018 to be fired at 2019-09-20 03:04:36.524026096 +0000 UTC m=+369.121532018
I0920 03:04:36.524040  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 03:04:31 +0000 UTC}]
I0920 03:04:36.524060  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.524054512 +0000 UTC m=+369.121560426 to be fired at 2019-09-20 03:04:36.524054512 +0000 UTC m=+369.121560426
I0920 03:04:36.524071  108596 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:36.524083  108596 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:36.524194  108596 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:36.524289  108596 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2
I0920 03:04:36.525576  108596 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2: (1.269316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35468]
I0920 03:04:36.525588  108596 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/pods/testpod-2: (1.295612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.526305  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (5.14243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35494]
I0920 03:04:36.526434  108596 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/events/testpod-2.15c606810c65c942: (1.711584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35500]
I0920 03:04:36.526444  108596 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/events/testpod-2.15c606810c64e1d7: (1.690109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35498]
I0920 03:04:36.526645  108596 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 03:04:26 +0000 UTC,}] Taint
I0920 03:04:36.526689  108596 httplog.go:90] GET /api/v1/nodes/node-2: (4.809798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35484]
I0920 03:04:36.530361  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.024724882s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:36.530401  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.024771677s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.530413  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.024784612s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.530428  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.024797248s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.530470  108596 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 03:04:36.530453201 +0000 UTC m=+369.127959115. Adding it to the Taint queue.
I0920 03:04:36.530502  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.02483774s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 03:04:36.530530  108596 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0920 03:04:36.530541  108596 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0920 03:04:36.530550  108596 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0920 03:04:36.532034  108596 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.24129ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
E0920 03:04:36.532183  108596 node_lifecycle_controller.go:1037] Error updating node node-1: Operation cannot be fulfilled on nodes "node-1": the object has been modified; please apply your changes to the latest version and try again
I0920 03:04:36.533204  108596 httplog.go:90] GET /api/v1/nodes/node-1: (852.69µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.553686  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.048012572s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:36.553742  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.048078425s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.553757  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.048093626s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.553768  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.048104913s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.553821  108596 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 03:04:36.55380616 +0000 UTC m=+369.151312063. Adding it to the Taint queue.
I0920 03:04:36.553851  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.048138907s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 03:04:36.553873  108596 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 03:04:36.553880  108596 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 03:04:36.553887  108596 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 03:04:36.556176  108596 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.970362ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
E0920 03:04:36.556435  108596 node_lifecycle_controller.go:1037] Error updating node node-2: Operation cannot be fulfilled on nodes "node-2": the object has been modified; please apply your changes to the latest version and try again
I0920 03:04:36.557752  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.106018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.578359  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.072635115s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:36.578428  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.072707633s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.578453  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.072739757s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.578468  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.072754678s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:36.579301  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (537.045µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.582934  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.707597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.583180  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.583206  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 03:04:31 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 03:04:36 +0000 UTC}]
I0920 03:04:36.583225  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.583242  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 03:04:31 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 03:04:36 +0000 UTC}]
I0920 03:04:36.583235  108596 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 03:04:36.57852635 +0000 UTC m=+369.176032257,}] Taint to Node node-2
I0920 03:04:36.583251  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.583234729 +0000 UTC m=+369.180740642 to be fired at 2019-09-20 03:04:36.583234729 +0000 UTC m=+369.180740642
I0920 03:04:36.583271  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.583261911 +0000 UTC m=+369.180767825 to be fired at 2019-09-20 03:04:36.583261911 +0000 UTC m=+369.180767825
W0920 03:04:36.583272  108596 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2}. Skipping.
W0920 03:04:36.583282  108596 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2}. Skipping.
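The paired "Adding TimedWorkerQueue item … / Trying to add already existing work … Skipping" lines show the eviction queue de-duplicating: work is keyed by the pod's namespaced name, and a second add for an existing key is a no-op, which is why the doubled adds above are harmless. A toy version of that queue, assuming time.AfterFunc-backed timers stand in for the controller's workers:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// timedWorkerQueue is a toy version of the controller's TimedWorkerQueue:
// one timer per namespaced key, and duplicate adds are skipped.
type timedWorkerQueue struct {
	mu      sync.Mutex
	workers map[string]*time.Timer
}

func newTimedWorkerQueue() *timedWorkerQueue {
	return &timedWorkerQueue{workers: map[string]*time.Timer{}}
}

// AddWork schedules fn to fire at fireAt unless work for key already exists.
func (q *timedWorkerQueue) AddWork(key string, fireAt time.Time, fn func()) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if _, ok := q.workers[key]; ok {
		fmt.Printf("Trying to add already existing work for %s. Skipping.\n", key)
		return
	}
	q.workers[key] = time.AfterFunc(time.Until(fireAt), fn)
}

// CancelWork stops a pending timer, e.g. when a taint is removed.
func (q *timedWorkerQueue) CancelWork(key string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if t, ok := q.workers[key]; ok {
		t.Stop()
		delete(q.workers, key)
	}
}

func main() {
	q := newTimedWorkerQueue()
	evict := func() { fmt.Println("evicting testpod-2") }
	q.AddWork("ns/testpod-2", time.Now().Add(50*time.Millisecond), evict)
	q.AddWork("ns/testpod-2", time.Now().Add(50*time.Millisecond), evict) // skipped
	time.Sleep(100 * time.Millisecond)
}
```

The real queue also cancels and re-arms a worker when the taint set changes, which is what happens a few lines below when the fire time moves out by five minutes.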
I0920 03:04:36.583975  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (503.764µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.586859  108596 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.112614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.587126  108596 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 03:04:36.587218  108596 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
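Two things happen here: the not-ready NoExecute taint is removed now that unreachable is in place (a node carries one or the other, not both), and with zero Ready nodes the controller enters master disruption mode, backing off evictions on the theory that losing every node at once usually means a control-plane or network problem rather than genuine node failure. A simplified version of the zone-health classification behind that decision, assuming the default 0.55 unhealthy-zone threshold:

```go
package main

import "fmt"

// disruptionState is a simplified version of the zone-health classification
// the controller uses before deciding how aggressively to evict. The 0.55
// cutoff mirrors the default --unhealthy-zone-threshold.
func disruptionState(readyNodes, totalNodes int) string {
	switch {
	case totalNodes == 0 || readyNodes == 0:
		return "FullDisruption" // "Entering master disruption mode."
	case float64(readyNodes)/float64(totalNodes) < 0.55:
		return "PartialDisruption" // eviction rate is throttled
	default:
		return "Normal"
	}
}

func main() {
	fmt.Println(disruptionState(0, 3)) // all three test nodes not-Ready
	fmt.Println(disruptionState(1, 3))
	fmt.Println(disruptionState(3, 3))
}
```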
I0920 03:04:36.587382  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.587386  108596 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 03:04:36.587400  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 03:04:36 +0000 UTC}]
I0920 03:04:36.587405  108596 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 03:04:36 +0000 UTC}]
I0920 03:04:36.587432  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.587422786 +0000 UTC m=+369.184928703 to be fired at 2019-09-20 03:09:36.587422786 +0000 UTC m=+669.184928703
I0920 03:04:36.587433  108596 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2 at 2019-09-20 03:04:36.587421918 +0000 UTC m=+369.184927832 to be fired at 2019-09-20 03:09:36.587421918 +0000 UTC m=+669.184927832
W0920 03:04:36.587451  108596 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2}. Skipping.
W0920 03:04:36.587447  108596 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsa67e3eef-c2d6-4fc2-8a2f-b77fc5f49373/testpod-2}. Skipping.
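Compare the timestamps on the re-added work item: queued at m=+369 but firing at m=+669, i.e. 300 seconds out. That matches the default tolerationSeconds of 300 that the DefaultTolerationSeconds admission plugin gives pods for node.kubernetes.io/unreachable, so once the taint flips from not-ready to unreachable the eviction is re-armed five minutes ahead instead of firing immediately. The toleration in question, built with the real core/v1 types:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The default toleration the admission plugin adds to pods; with the
	// unreachable NoExecute taint present, eviction fires only after
	// tolerationSeconds have elapsed.
	seconds := int64(300)
	tol := v1.Toleration{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("%s is tolerated for %ds before eviction\n", tol.Key, *tol.TolerationSeconds)
}
```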
I0920 03:04:36.587719  108596 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (329.291µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.623458  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.817079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.723126  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.510862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.823304  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.687754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:36.923350  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.690866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
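The once-per-100ms GETs of node-2 that dominate the rest of the log are, most likely, the test's wait loop polling the node until the expected condition or taint shows up, in the style of wait.Poll. A dependency-free sketch of such a loop, with a hypothetical cond callback standing in for the actual taint check:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil mimics the test's wait loop: evaluate cond every interval until
// it reports true or the timeout expires. The 100 ms cadence matches the
// GET /api/v1/nodes/node-2 lines above.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	tries := 0
	err := pollUntil(100*time.Millisecond, time.Second, func() (bool, error) {
		tries++                // in the test this is a GET of the node,
		return tries == 3, nil // checking whether the taint is present
	})
	fmt.Println(err, "after", tries, "tries")
}
```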
I0920 03:04:36.925888  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.925916  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.927402  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.927981  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.928057  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.929852  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:36.930384  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.023228  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.596233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.070530  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.070531  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.071989  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.072199  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.073718  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.073819  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.123202  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.662875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.151037  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.151064  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.151036  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.151052  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.151044  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.151445  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.205802  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.205801  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.205887  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.205802  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.206009  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.206989  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.223167  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.574947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.280155  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.323116  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.543864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.355610  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.409126  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.423219  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.591395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.523306  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.701753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.623180  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.546435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.723188  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.653859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.823428  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.7549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.923562  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.917075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:37.926056  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.926079  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.927668  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.928209  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.928236  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.930025  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:37.930565  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.023295  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.675469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.070677  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.070781  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.072354  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.072411  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.073887  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.073992  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.123559  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.88013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.151253  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.151279  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.151253  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.151285  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.151335  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.151615  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.205945  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.205969  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.206158  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.206364  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.206366  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.207174  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.223358  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.680887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.280428  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.323437  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.734032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.355878  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.409435  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.423609  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.930649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.523394  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.769418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.623478  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.807794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.723549  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.871787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.823480  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.81641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.923592  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.895453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:38.926374  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.926493  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.927805  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.928400  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.928423  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.930351  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:38.930799  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.023284  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.665179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.070915  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.070918  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.072644  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.072750  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.074046  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.074194  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.123461  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.744819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.151428  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.151446  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.151465  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.151429  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.151447  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.151780  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.206132  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.206139  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.206606  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.206265  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.206660  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.207389  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.223170  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.277363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.280651  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.323551  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.863136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.356114  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.409673  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.423443  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.672482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.523407  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.738995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.623371  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.699327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.723409  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.739513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.807471  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.370742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:39.809260  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.142915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
I0920 03:04:39.810762  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.089029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51456]
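This trio of reads against the default namespace, the kubernetes service, and its endpoints recurs roughly every ten seconds: a periodic reconcile loop keeping the apiserver's bootstrap objects in place. The shape of such a loop, with an illustrative reconcile body and a bounded iteration count so the example terminates:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative body only; the real reconciler reads and repairs the
	// default namespace, the kubernetes service, and its endpoints.
	reconcile := func() {
		fmt.Println("ensure default namespace, kubernetes service, endpoints")
	}
	// ~10s cadence in the log, shortened here so the example finishes quickly.
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	reconcile()
	for i := 0; i < 2; i++ {
		<-ticker.C
		reconcile()
	}
}
```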
I0920 03:04:39.823033  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.395368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.923491  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.807672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:39.926579  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.926630  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.928009  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.928605  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.928611  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.930568  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:39.930953  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.023480  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.797518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.071112  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.071149  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.072872  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.072882  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.074269  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.074378  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.123378  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.706109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.151647  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.151652  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.151661  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.151663  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.151909  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.151988  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.206626  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.206758  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.206926  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.206959  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.207073  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.207589  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.223077  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.490492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.280851  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.323273  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.59364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.356285  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.409896  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.423412  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.794028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.523346  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.667698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.623234  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.582617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.723304  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.654452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.823534  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.792741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.924272  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.686973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:40.926793  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.926805  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.928198  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.928806  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.928831  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.930694  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.931107  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:40.947833  108596 httplog.go:90] GET /api/v1/namespaces/default: (1.483993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:40.949434  108596 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.123026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:40.950839  108596 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (993.863µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0920 03:04:41.023434  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.721972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.071402  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.071580  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.073098  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.073099  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.074624  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.074635  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.123342  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.685198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.151879  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.151914  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.151900  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.152167  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.152207  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.152218  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.206830  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.206905  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.207180  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.207333  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.207415  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.207714  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.223128  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.543749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.281228  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.323253  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.621869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.356524  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.410109  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.423100  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.485831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.522786  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.019985668s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:41.522847  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.020053892s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.522866  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.020075753s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.522884  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.020092737s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.522939  108596 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 03:04:41.522920909 +0000 UTC m=+374.120426827. Adding it to the Taint queue.
I0920 03:04:41.522975  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020150915s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:41.522997  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020173665s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.523013  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020189606s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.523027  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.02020366s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.523058  108596 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 03:04:41.523048055 +0000 UTC m=+374.120553965. Adding it to the Taint queue.
I0920 03:04:41.523088  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.02016828s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:41.523114  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.02019546s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.523141  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.020222425s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.523171  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.020251417s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.523204  108596 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 03:04:41.523193454 +0000 UTC m=+374.120699366. Adding it to the Taint queue.
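Five seconds after the first sweep the controller runs again, so every node now reports "hasn't been updated for 10.02…s" and all three are re-added to the taint queue; the queue behaves as a set, so the repeated adds are idempotent. A sketch of that monitor loop under those assumptions (the 5s period compressed to 50ms so it runs quickly):

```go
package main

import (
	"fmt"
	"time"
)

// A model of the controller's node-monitor loop: every period it re-checks
// each node's heartbeat age and re-queues the unresponsive ones. The queue
// is a set, so the second pass's adds (as in the log) are no-ops.
func main() {
	lastHeartbeat := map[string]time.Time{
		"node-0": time.Now().Add(-6 * time.Second),
		"node-1": time.Now().Add(-6 * time.Second),
		"node-2": time.Now().Add(-6 * time.Second),
	}
	taintQueue := map[string]bool{}
	grace := 5 * time.Second

	ticker := time.NewTicker(50 * time.Millisecond) // 5s in the real controller
	defer ticker.Stop()
	for pass := 0; pass < 2; pass++ {
		<-ticker.C
		for node, hb := range lastHeartbeat {
			if age := time.Since(hb); age > grace {
				fmt.Printf("node %s hasn't been updated for %v; adding to taint queue\n", node, age)
				taintQueue[node] = true // idempotent re-add
			}
		}
	}
	fmt.Println("queued nodes:", len(taintQueue))
}
```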
I0920 03:04:41.523415  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.763713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.588307  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.082662837s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:41.588633  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.082993413s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.588784  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.083151312s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.588886  108596 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.083255256s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.589017  108596 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 03:04:41.588991211 +0000 UTC m=+374.186497132. Adding it to the Taint queue.
I0920 03:04:41.589179  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.08351168s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:41.589291  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083622812s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.589425  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083758577s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.589513  108596 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083846389s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.589703  108596 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 03:04:41.589680798 +0000 UTC m=+374.187186711. Adding it to the Taint queue.
I0920 03:04:41.589876  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.084160523s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 03:04:41.589992  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.084276205s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.590096  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.084380781s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.590256  108596 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.084540807s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 03:04:26 +0000 UTC,LastTransitionTime:2019-09-20 03:04:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 03:04:41.590416  108596 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 03:04:41.590398151 +0000 UTC m=+374.187904222. Adding it to the Taint queue.
I0920 03:04:41.623164  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.571446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.723053  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.385785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.823212  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.548574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.923628  108596 httplog.go:90] GET /api/v1/nodes/node-2: (1.962294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35496]
I0920 03:04:41.926979  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.927002  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.928371  108596 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 03:04:41.929024