Result: FAILURE
Tests: 2 failed / 2609 succeeded
Started: 2020-01-14 21:56
Elapsed: 25m53s
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/ec5e5cfe-7afc-46f6-9d4f-969d77ee57e4/targets/test

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0114 22:20:50.993053  109973 feature_gate.go:243] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (64.84s)

				from junit_20200114-221042.xml

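The failing subtest exercises the TaintBasedEvictions feature: when a node becomes NotReady, the node lifecycle controller adds the NoExecute taint node.kubernetes.io/not-ready, and pods without a matching toleration are evicted. For context, a pod can delay that eviction with a toleration like the following minimal sketch (the 300-second value is illustrative, not taken from this test):

```yaml
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300   # pod may stay bound to a NotReady node for up to 5 minutes
```

The subtest under test here runs pods with no such toleration, so it verifies that eviction happens on the default schedule.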


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations
W0114 22:21:00.857512  109973 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0114 22:21:00.857538  109973 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0114 22:21:00.857552  109973 master.go:308] Node port range unspecified. Defaulting to 30000-32767.
I0114 22:21:00.857566  109973 master.go:264] Using reconciler: 
I0114 22:21:00.858921  109973 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.859120  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.859200  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.860115  109973 store.go:1350] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0114 22:21:00.860150  109973 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0114 22:21:00.860324  109973 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.860576  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.860602  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.861152  109973 store.go:1350] Monitoring events count at <storage-prefix>//events
I0114 22:21:00.861204  109973 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I0114 22:21:00.861202  109973 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.861355  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.861318  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.861417  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.861993  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.862230  109973 store.go:1350] Monitoring limitranges count at <storage-prefix>//limitranges
I0114 22:21:00.862302  109973 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0114 22:21:00.862402  109973 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.862571  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.862601  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.863049  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.863157  109973 store.go:1350] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0114 22:21:00.863233  109973 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0114 22:21:00.863308  109973 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.863964  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.864883  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.864914  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.865549  109973 store.go:1350] Monitoring secrets count at <storage-prefix>//secrets
I0114 22:21:00.865612  109973 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0114 22:21:00.865697  109973 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.865847  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.865876  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.866566  109973 store.go:1350] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0114 22:21:00.866593  109973 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0114 22:21:00.866765  109973 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.866901  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.866929  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.867615  109973 store.go:1350] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0114 22:21:00.867647  109973 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0114 22:21:00.867759  109973 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.867909  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.867929  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.868672  109973 store.go:1350] Monitoring configmaps count at <storage-prefix>//configmaps
I0114 22:21:00.868788  109973 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0114 22:21:00.868977  109973 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.869115  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.869155  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.869196  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.869278  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.869331  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.869518  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.870249  109973 store.go:1350] Monitoring namespaces count at <storage-prefix>//namespaces
I0114 22:21:00.870336  109973 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0114 22:21:00.870411  109973 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.870653  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.870684  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.871073  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.871275  109973 store.go:1350] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0114 22:21:00.871316  109973 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0114 22:21:00.871414  109973 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.871544  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.871568  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.872126  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.872427  109973 store.go:1350] Monitoring nodes count at <storage-prefix>//minions
I0114 22:21:00.872507  109973 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I0114 22:21:00.872630  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.872756  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.872780  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.873314  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.873429  109973 store.go:1350] Monitoring pods count at <storage-prefix>//pods
I0114 22:21:00.873494  109973 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I0114 22:21:00.873613  109973 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.873756  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.873785  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.874581  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.874625  109973 store.go:1350] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0114 22:21:00.874751  109973 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0114 22:21:00.874841  109973 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.874977  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.875003  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.875450  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.875627  109973 store.go:1350] Monitoring services count at <storage-prefix>//services/specs
I0114 22:21:00.875676  109973 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.875688  109973 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0114 22:21:00.875822  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.875844  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.876664  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.878082  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.878107  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.878805  109973 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.878929  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.878953  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.879469  109973 store.go:1350] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0114 22:21:00.879490  109973 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0114 22:21:00.879571  109973 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0114 22:21:00.879993  109973 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.880610  109973 watch_cache.go:409] Replace watchCache (rev: 57236) 
I0114 22:21:00.880610  109973 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.881478  109973 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.882131  109973 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.882598  109973 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.883111  109973 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.883450  109973 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.883585  109973 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.883782  109973 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.884284  109973 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.884723  109973 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.884885  109973 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.885488  109973 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.885690  109973 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.886276  109973 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.886476  109973 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.886954  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.887112  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.887262  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.887395  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.887591  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.887731  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.887909  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.888503  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.888744  109973 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.889408  109973 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.889879  109973 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.890099  109973 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.890329  109973 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.890924  109973 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.891196  109973 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.891770  109973 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.892298  109973 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.892798  109973 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.893330  109973 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.893517  109973 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.893611  109973 master.go:488] Skipping disabled API group "auditregistration.k8s.io".
I0114 22:21:00.893627  109973 master.go:499] Enabling API group "authentication.k8s.io".
I0114 22:21:00.893636  109973 master.go:499] Enabling API group "authorization.k8s.io".
I0114 22:21:00.893730  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.893848  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.893866  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.894511  109973 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 22:21:00.894560  109973 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 22:21:00.894670  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.894795  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.894820  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.895343  109973 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 22:21:00.895387  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.895401  109973 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 22:21:00.895511  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.895648  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.895675  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.896346  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.896381  109973 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 22:21:00.896361  109973 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 22:21:00.896504  109973 master.go:499] Enabling API group "autoscaling".
I0114 22:21:00.896671  109973 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.896891  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.896963  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.897211  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.897536  109973 store.go:1350] Monitoring jobs.batch count at <storage-prefix>//jobs
I0114 22:21:00.897576  109973 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0114 22:21:00.897783  109973 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.897883  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.897903  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.898540  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.898706  109973 store.go:1350] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0114 22:21:00.898731  109973 master.go:499] Enabling API group "batch".
I0114 22:21:00.898735  109973 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0114 22:21:00.898875  109973 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.899054  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.899086  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.899545  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.899687  109973 store.go:1350] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0114 22:21:00.899712  109973 master.go:499] Enabling API group "certificates.k8s.io".
I0114 22:21:00.899784  109973 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0114 22:21:00.899850  109973 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.899982  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.900006  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.900633  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.902041  109973 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0114 22:21:00.902115  109973 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0114 22:21:00.902218  109973 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.902442  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.902477  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.902992  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.903223  109973 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0114 22:21:00.903381  109973 master.go:499] Enabling API group "coordination.k8s.io".
I0114 22:21:00.903266  109973 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0114 22:21:00.903562  109973 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.903705  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.903726  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.904426  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.904488  109973 store.go:1350] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0114 22:21:00.904504  109973 master.go:499] Enabling API group "discovery.k8s.io".
I0114 22:21:00.904583  109973 reflector.go:188] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0114 22:21:00.904672  109973 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.904828  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.904863  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.905903  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.906245  109973 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0114 22:21:00.906268  109973 master.go:499] Enabling API group "extensions".
I0114 22:21:00.906286  109973 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0114 22:21:00.906429  109973 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.906546  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.906572  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.907092  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.907190  109973 store.go:1350] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0114 22:21:00.907239  109973 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0114 22:21:00.907478  109973 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.907585  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.907607  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.908099  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.908336  109973 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0114 22:21:00.908362  109973 master.go:499] Enabling API group "networking.k8s.io".
I0114 22:21:00.908484  109973 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0114 22:21:00.908576  109973 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.909343  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.909374  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.909906  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.909941  109973 store.go:1350] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0114 22:21:00.909975  109973 master.go:499] Enabling API group "node.k8s.io".
I0114 22:21:00.910060  109973 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0114 22:21:00.910156  109973 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.910274  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.910292  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.910803  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.910942  109973 store.go:1350] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0114 22:21:00.911010  109973 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0114 22:21:00.911107  109973 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.911305  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.911330  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.911824  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.911944  109973 store.go:1350] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0114 22:21:00.911965  109973 master.go:499] Enabling API group "policy".
I0114 22:21:00.911984  109973 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0114 22:21:00.912008  109973 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.912117  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.912134  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.912983  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.913031  109973 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0114 22:21:00.913074  109973 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0114 22:21:00.913238  109973 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.913358  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.913380  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.913763  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.914036  109973 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0114 22:21:00.914097  109973 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0114 22:21:00.914095  109973 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.914328  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.914356  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.914861  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.915060  109973 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0114 22:21:00.915118  109973 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0114 22:21:00.915254  109973 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.915386  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.915409  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.915879  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.916132  109973 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0114 22:21:00.916353  109973 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.916423  109973 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0114 22:21:00.916464  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.916490  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.917124  109973 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0114 22:21:00.917157  109973 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0114 22:21:00.917286  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.917366  109973 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.917504  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.917520  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.917956  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.918152  109973 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0114 22:21:00.918211  109973 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0114 22:21:00.918211  109973 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.918334  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.918355  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.918859  109973 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0114 22:21:00.918981  109973 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0114 22:21:00.919047  109973 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.919118  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.919172  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.919194  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.920127  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.920828  109973 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0114 22:21:00.920974  109973 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0114 22:21:00.921443  109973 master.go:499] Enabling API group "rbac.authorization.k8s.io".
I0114 22:21:00.922201  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.923189  109973 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.923291  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.923310  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.923953  109973 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0114 22:21:00.924060  109973 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0114 22:21:00.924328  109973 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.924467  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.924491  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.924997  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.925691  109973 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0114 22:21:00.925713  109973 master.go:499] Enabling API group "scheduling.k8s.io".
I0114 22:21:00.925753  109973 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0114 22:21:00.925830  109973 master.go:488] Skipping disabled API group "settings.k8s.io".
I0114 22:21:00.925994  109973 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.926093  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.926110  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.926546  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.926622  109973 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0114 22:21:00.926669  109973 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0114 22:21:00.926830  109973 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.927120  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.927150  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.927652  109973 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0114 22:21:00.927987  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.928353  109973 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0114 22:21:00.928584  109973 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.928710  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.928737  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.929328  109973 watch_cache.go:409] Replace watchCache (rev: 57237) 
I0114 22:21:00.929479  109973 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0114 22:21:00.929527  109973 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0114 22:21:00.930281  109973 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.930396  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.930428  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.931119  109973 store.go:1350] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0114 22:21:00.931202  109973 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0114 22:21:00.931325  109973 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.931442  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.931465  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.932369  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.932377  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.932969  109973 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0114 22:21:00.933025  109973 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0114 22:21:00.933197  109973 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.933300  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.933314  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.933949  109973 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0114 22:21:00.933998  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.934023  109973 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0114 22:21:00.934137  109973 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.934257  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.934277  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.934855  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.935082  109973 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0114 22:21:00.935207  109973 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0114 22:21:00.935215  109973 master.go:499] Enabling API group "storage.k8s.io".
I0114 22:21:00.935375  109973 master.go:488] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0114 22:21:00.935567  109973 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.935690  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.935702  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.935856  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.936582  109973 store.go:1350] Monitoring deployments.apps count at <storage-prefix>//deployments
I0114 22:21:00.936662  109973 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0114 22:21:00.936759  109973 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.936875  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.936898  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.937439  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.937455  109973 store.go:1350] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0114 22:21:00.937526  109973 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0114 22:21:00.937660  109973 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.937806  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.937827  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.938262  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.938546  109973 store.go:1350] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0114 22:21:00.938602  109973 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0114 22:21:00.938746  109973 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.938852  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.938868  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.939479  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.939499  109973 store.go:1350] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0114 22:21:00.939559  109973 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0114 22:21:00.939711  109973 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.939849  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.939877  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.940478  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.940592  109973 store.go:1350] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0114 22:21:00.940620  109973 master.go:499] Enabling API group "apps".
I0114 22:21:00.940630  109973 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0114 22:21:00.940807  109973 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.940935  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.940965  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.941688  109973 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0114 22:21:00.941738  109973 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0114 22:21:00.941872  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.941862  109973 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.941996  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.942017  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.942849  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.943245  109973 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0114 22:21:00.943345  109973 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0114 22:21:00.943435  109973 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.943538  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.943561  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.944134  109973 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0114 22:21:00.944327  109973 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0114 22:21:00.944451  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.944469  109973 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.944601  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.944629  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.945157  109973 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0114 22:21:00.945181  109973 master.go:499] Enabling API group "admissionregistration.k8s.io".
I0114 22:21:00.945196  109973 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0114 22:21:00.945224  109973 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.945399  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.945478  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:00.945509  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:00.946369  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.946737  109973 store.go:1350] Monitoring events count at <storage-prefix>//events
I0114 22:21:00.946766  109973 master.go:499] Enabling API group "events.k8s.io".
I0114 22:21:00.946774  109973 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I0114 22:21:00.946971  109973 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.947231  109973 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.947512  109973 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.947597  109973 watch_cache.go:409] Replace watchCache (rev: 57238) 
I0114 22:21:00.947655  109973 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.947815  109973 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.947930  109973 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.948102  109973 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.948376  109973 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.948500  109973 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.948583  109973 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.950124  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.950458  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.951244  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.951510  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.952838  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.953041  109973 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.953654  109973 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.953879  109973 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.954906  109973 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.955167  109973 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.955224  109973 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0114 22:21:00.955772  109973 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.955893  109973 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.956130  109973 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.957330  109973 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.957955  109973 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.959115  109973 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.959195  109973 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0114 22:21:00.959989  109973 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.960283  109973 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.961530  109973 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.962214  109973 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.962464  109973 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.963501  109973 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.963559  109973 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0114 22:21:00.964570  109973 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.964871  109973 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.965425  109973 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.966442  109973 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.966850  109973 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.967382  109973 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.968509  109973 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.969035  109973 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.969547  109973 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.970807  109973 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.971510  109973 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.971575  109973 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0114 22:21:00.972101  109973 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.973400  109973 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.973455  109973 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0114 22:21:00.973975  109973 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.974428  109973 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.974892  109973 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.975625  109973 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.976092  109973 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.976549  109973 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.976932  109973 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.977901  109973 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.977980  109973 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0114 22:21:00.978709  109973 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.979300  109973 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.979579  109973 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.980735  109973 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.980960  109973 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.981215  109973 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.982140  109973 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.982365  109973 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.982552  109973 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.983094  109973 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.983313  109973 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.983491  109973 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:21:00.983529  109973 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0114 22:21:00.983534  109973 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0114 22:21:00.984587  109973 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.985162  109973 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.986158  109973 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.986651  109973 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.987347  109973 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a0dfeac7-0903-47c8-84b2-2a2b940f4e77", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:21:00.991571  109973 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
W0114 22:21:00.991606  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:21:00.991608  109973 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0114 22:21:00.991623  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:00.991633  109973 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:21:00.991643  109973 healthz.go:177] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I0114 22:21:00.991650  109973 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
I0114 22:21:00.991684  109973 httplog.go:90] GET /healthz: (303.084µs) 0 [Go-http-client/1.1 127.0.0.1:43850]
I0114 22:21:00.991686  109973 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0114 22:21:00.991696  109973 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller
I0114 22:21:00.991976  109973 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0114 22:21:00.991995  109973 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0114 22:21:00.992493  109973 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.114834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0114 22:21:00.992860  109973 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (403.268µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I0114 22:21:00.993644  109973 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=57236 labels= fields= timeout=6m21s
I0114 22:21:00.994783  109973 httplog.go:90] GET /api/v1/services: (1.012877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0114 22:21:00.998290  109973 httplog.go:90] GET /api/v1/services: (831.247µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0114 22:21:01.000555  109973 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:21:01.000582  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:01.000593  109973 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:21:01.000602  109973 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:01.000624  109973 httplog.go:90] GET /healthz: (159.231µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.001824  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.845236ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0114 22:21:01.003838  109973 httplog.go:90] GET /api/v1/services: (1.459283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.003840  109973 httplog.go:90] GET /api/v1/services: (968.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0114 22:21:01.004602  109973 httplog.go:90] POST /api/v1/namespaces: (1.576916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43858]
I0114 22:21:01.005807  109973 httplog.go:90] GET /api/v1/namespaces/kube-public: (737.733µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.007426  109973 httplog.go:90] POST /api/v1/namespaces: (1.30082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.008754  109973 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (949.207µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.010382  109973 httplog.go:90] POST /api/v1/namespaces: (1.258415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.091886  109973 shared_informer.go:236] caches populated
I0114 22:21:01.091920  109973 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
I0114 22:21:01.093630  109973 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:21:01.093662  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:01.093674  109973 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:21:01.093683  109973 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:01.093715  109973 httplog.go:90] GET /healthz: (1.275565ms) 0 [Go-http-client/1.1 127.0.0.1:43856]
I0114 22:21:01.857545  109973 client.go:361] parsed scheme: "endpoint"
I0114 22:21:01.857618  109973 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:21:01.893352  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:01.893383  109973 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:21:01.893392  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:01.893515  109973 httplog.go:90] GET /healthz: (1.000631ms) 0 [Go-http-client/1.1 127.0.0.1:43856]
I0114 22:21:01.903657  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:01.903792  109973 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:21:01.903841  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:01.903875  109973 httplog.go:90] GET /healthz: (1.042203ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.993762  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:01.993826  109973 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:21:01.993837  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:01.993845  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.287891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.993876  109973 httplog.go:90] GET /healthz: (1.408954ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:01.994269  109973 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (1.091324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:01.995197  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (886.913µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.996065  109973 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.368902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:01.996471  109973 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0114 22:21:01.996947  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.416542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.998566  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.221242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0114 22:21:01.998577  109973 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (1.886659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:01.999912  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (875.282µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.000454  109973 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.432162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.000675  109973 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0114 22:21:02.000698  109973 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0114 22:21:02.001248  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (802.384µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.002434  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (812.857µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.003162  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.003187  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.003222  109973 httplog.go:90] GET /healthz: (649.411µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.003719  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (901.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.004999  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (909.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.006032  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (652.016µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.007597  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.223947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.007807  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0114 22:21:02.008896  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (914.176µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.010433  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.186725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.010644  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0114 22:21:02.011633  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (767.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.013368  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.013547  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0114 22:21:02.014569  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (793.777µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.015950  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.07274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.016130  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0114 22:21:02.018042  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.494821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.019615  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.234937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.019826  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0114 22:21:02.020999  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (925.172µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.022497  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.163912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.022769  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0114 22:21:02.023727  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (756.959µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.025428  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.025724  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0114 22:21:02.026587  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (677.102µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.028387  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.427221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.028595  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0114 22:21:02.029556  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (761.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.031297  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.290058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.031634  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0114 22:21:02.032820  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (995.474µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.034748  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.034967  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0114 22:21:02.035890  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (705.87µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.037783  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.353357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.037984  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0114 22:21:02.039401  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.186025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.041230  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.475177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.041851  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0114 22:21:02.042920  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (843.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.044521  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.245854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.044749  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0114 22:21:02.046496  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.572469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.048098  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.048465  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0114 22:21:02.049723  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.035305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.051412  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.338572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.051623  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0114 22:21:02.052685  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (882.746µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.054572  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.054782  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0114 22:21:02.055737  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (759.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.058617  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.528245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.058797  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0114 22:21:02.059960  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (982.379µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.063244  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.726742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.063502  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0114 22:21:02.065113  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.364801ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.067234  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.067456  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0114 22:21:02.068998  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.325657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.070901  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.071188  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0114 22:21:02.072821  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.312135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.074756  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.503269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.074985  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0114 22:21:02.077503  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (834.615µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.079372  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.469501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.079607  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0114 22:21:02.080772  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (938.04µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.082493  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.318157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.082714  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0114 22:21:02.084529  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.542498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.086161  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.250889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.086368  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0114 22:21:02.088672  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (962.087µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.090690  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.616507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.091190  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0114 22:21:02.092451  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.04284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.093312  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.093331  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.093394  109973 httplog.go:90] GET /healthz: (1.054471ms) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:02.094364  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.497143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.094606  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0114 22:21:02.096462  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.65364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.098714  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.46052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.098986  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0114 22:21:02.101908  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (2.618173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.104109  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.104405  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.104502  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.105876  109973 httplog.go:90] GET /healthz: (2.682101ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.105900  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0114 22:21:02.106880  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (701.251µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.108861  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.109111  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0114 22:21:02.110220  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (807.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.112485  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.854932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.112779  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0114 22:21:02.114411  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.345656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.116832  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.622165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.117102  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0114 22:21:02.118227  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (892.403µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.121664  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.990722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.121911  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0114 22:21:02.123262  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.034154ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.125987  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.917091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.126291  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0114 22:21:02.127610  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.11677ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.130404  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.130635  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0114 22:21:02.131703  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (850.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.133794  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.592519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.134044  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0114 22:21:02.135202  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (921.07µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.137356  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.780617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.137615  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0114 22:21:02.138693  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (890.391µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.140758  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.141038  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0114 22:21:02.142618  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.35966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.144624  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.579308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.144896  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0114 22:21:02.145880  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (805.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.147779  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.148096  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0114 22:21:02.149127  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (696.392µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.150804  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.151002  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0114 22:21:02.151909  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (704.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.153956  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.154207  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0114 22:21:02.155550  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.097285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.157494  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.585552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.157751  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0114 22:21:02.158701  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (749.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.160520  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.454064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.160770  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0114 22:21:02.161810  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (834.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.163572  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.289715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.163798  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0114 22:21:02.166135  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.105821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.168502  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.985289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.168700  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0114 22:21:02.170061  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.10077ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.171856  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.172123  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0114 22:21:02.173484  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (856.507µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.176533  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.537295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.176873  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0114 22:21:02.177764  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (704.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.179456  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.234028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.179686  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0114 22:21:02.180915  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.066382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.184070  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.743308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.184477  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0114 22:21:02.185706  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (774.433µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.187448  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.306335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.187676  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0114 22:21:02.193154  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.51237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.193159  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.193184  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.193215  109973 httplog.go:90] GET /healthz: (855.959µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:02.204658  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.204684  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.204715  109973 httplog.go:90] GET /healthz: (2.002438ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.213066  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.45113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.213334  109973 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0114 22:21:02.232815  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.170874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.253691  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.899858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.253940  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0114 22:21:02.272762  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.098577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.293521  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.824727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.294980  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0114 22:21:02.295422  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.295448  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.295483  109973 httplog.go:90] GET /healthz: (3.055816ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:02.303548  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.303582  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.303622  109973 httplog.go:90] GET /healthz: (914.29µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.312933  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.244558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.334576  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.792814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.334828  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0114 22:21:02.352882  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.174872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.373531  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.871593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.373874  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0114 22:21:02.392956  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.284208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.393363  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.393389  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.393418  109973 httplog.go:90] GET /healthz: (865.346µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:02.403619  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.403656  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.403700  109973 httplog.go:90] GET /healthz: (987.986µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.413705  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.027818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.414085  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0114 22:21:02.432891  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.266188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.454141  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.395318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.454410  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0114 22:21:02.473216  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.442483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.493747  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.06614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.494000  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0114 22:21:02.494162  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.494185  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.494222  109973 httplog.go:90] GET /healthz: (1.853132ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:02.503390  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.503418  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.503486  109973 httplog.go:90] GET /healthz: (765.038µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.512963  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.347006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.533819  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.534058  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0114 22:21:02.552907  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.208341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.573315  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.640727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.573555  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0114 22:21:02.593215  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.436046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.593303  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.593324  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.593386  109973 httplog.go:90] GET /healthz: (855.905µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:02.603653  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.603692  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.603723  109973 httplog.go:90] GET /healthz: (931.795µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.613222  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.580172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.613436  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0114 22:21:02.633125  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.407144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.653581  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.865453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.653795  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0114 22:21:02.672886  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.247005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.693477  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.795147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.693711  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0114 22:21:02.694001  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.694023  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.694060  109973 httplog.go:90] GET /healthz: (1.692221ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:02.703601  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.703626  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.703667  109973 httplog.go:90] GET /healthz: (941.845µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.712911  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.24099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.733544  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.869881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.733808  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0114 22:21:02.752947  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.219192ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.773356  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.690847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.773635  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0114 22:21:02.792881  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.185998ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.793182  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.793212  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.793241  109973 httplog.go:90] GET /healthz: (808.768µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:02.803464  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.803491  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.803546  109973 httplog.go:90] GET /healthz: (889.498µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.814384  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.587694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.814643  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0114 22:21:02.832843  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.19379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.853331  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.684914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.853580  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0114 22:21:02.874008  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.155359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.893464  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.842581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:02.893743  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0114 22:21:02.894090  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.894117  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.894149  109973 httplog.go:90] GET /healthz: (1.532427ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:02.903394  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.903423  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.903462  109973 httplog.go:90] GET /healthz: (778.595µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.912744  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.104986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.933548  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.816375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.933775  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0114 22:21:02.952749  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.120091ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.973573  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.890577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.974001  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0114 22:21:02.993953  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.253938ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:02.994418  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:02.994444  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:02.994469  109973 httplog.go:90] GET /healthz: (2.029062ms) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:03.004544  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.004594  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.004641  109973 httplog.go:90] GET /healthz: (1.976862ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.014253  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.616717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.014499  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0114 22:21:03.032851  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.259701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.053552  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.901466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.053978  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0114 22:21:03.073159  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.424968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.093891  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.225819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.094130  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0114 22:21:03.094235  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.094257  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.094293  109973 httplog.go:90] GET /healthz: (1.691067ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:03.103490  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.103520  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.103574  109973 httplog.go:90] GET /healthz: (849.9µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.112789  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.150004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.133308  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.767375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.133637  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0114 22:21:03.152848  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.113155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.173692  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.940117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.173914  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0114 22:21:03.192884  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.184375ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.193027  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.193057  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.193114  109973 httplog.go:90] GET /healthz: (718.04µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:03.203536  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.203570  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.203610  109973 httplog.go:90] GET /healthz: (859.548µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.213463  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.817237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.213731  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0114 22:21:03.234991  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (3.301315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.253911  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.254216  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0114 22:21:03.273325  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.656186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.293226  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.293254  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.293311  109973 httplog.go:90] GET /healthz: (883.364µs) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:03.293679  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.894189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.293910  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0114 22:21:03.303411  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.303439  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.303468  109973 httplog.go:90] GET /healthz: (747.207µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.312730  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.095245ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.334269  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.615496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.334522  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0114 22:21:03.352878  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.16614ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.373657  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.986886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.373931  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0114 22:21:03.393064  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.355016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.393187  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.393304  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.393343  109973 httplog.go:90] GET /healthz: (1.065537ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:03.403478  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.403508  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.403564  109973 httplog.go:90] GET /healthz: (854.622µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.413672  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.021874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.414059  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0114 22:21:03.432998  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.332267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.453793  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.074271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.454040  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0114 22:21:03.473072  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.332143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.493593  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.897775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.494240  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0114 22:21:03.494308  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.494330  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.494369  109973 httplog.go:90] GET /healthz: (1.945785ms) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:03.503603  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.503632  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.503667  109973 httplog.go:90] GET /healthz: (949.362µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.512928  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.332448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.533650  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.9429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.533963  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0114 22:21:03.552637  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (980.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.573334  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.677505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.573645  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0114 22:21:03.592959  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.241845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.593216  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.593263  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.593315  109973 httplog.go:90] GET /healthz: (845.918µs) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:03.603382  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.603420  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.603479  109973 httplog.go:90] GET /healthz: (798.411µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.613161  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.542123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.613399  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0114 22:21:03.632734  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.062789ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.653592  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.9203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.653966  109973 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0114 22:21:03.672956  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.273874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.675592  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.120552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.693448  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.784749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.693729  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0114 22:21:03.694077  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.694106  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.694171  109973 httplog.go:90] GET /healthz: (1.796584ms) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:03.703435  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.703470  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.703504  109973 httplog.go:90] GET /healthz: (837.815µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.713959  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (923.736µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.716995  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.559379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.733470  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.849262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.733737  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0114 22:21:03.752832  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.196833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.754484  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.205619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.773723  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.017492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.773958  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0114 22:21:03.792894  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.212588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.793115  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.793150  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.793193  109973 httplog.go:90] GET /healthz: (790.834µs) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:03.795392  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.057293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.803675  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.803712  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.803760  109973 httplog.go:90] GET /healthz: (1.024805ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.813324  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.718121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.813610  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0114 22:21:03.832937  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.148453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.834460  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.091964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.853979  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.856579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.854864  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0114 22:21:03.872834  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.233649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.874725  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.452302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.893439  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.834021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:03.893803  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0114 22:21:03.894118  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.894144  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.894232  109973 httplog.go:90] GET /healthz: (1.869835ms) 0 [Go-http-client/1.1 127.0.0.1:43860]
I0114 22:21:03.903406  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.903435  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.903474  109973 httplog.go:90] GET /healthz: (829.188µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.912732  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.092565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.914199  109973 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.036157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.933491  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.83836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.933813  109973 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0114 22:21:03.955283  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.751706ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.957215  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.280929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.973723  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.04968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.973988  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0114 22:21:03.993393  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:03.993443  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:03.993474  109973 httplog.go:90] GET /healthz: (998.101µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:03.994484  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.440678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:03.996976  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.978845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.003515  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:04.003540  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:04.003590  109973 httplog.go:90] GET /healthz: (824.584µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.013318  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.724362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.013607  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0114 22:21:04.032797  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.158911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.034441  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.19535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.053397  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.736001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.053629  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0114 22:21:04.073394  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.625591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.075806  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.777328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.093224  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:04.093258  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:04.093383  109973 httplog.go:90] GET /healthz: (992.563µs) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:04.093624  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.805659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.093967  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0114 22:21:04.103443  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:04.103465  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:04.103502  109973 httplog.go:90] GET /healthz: (755.852µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.112718  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.123003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.114168  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.052231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.133305  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.627999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.133551  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0114 22:21:04.152723  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.06517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.154156  109973 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.027708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.173947  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.276235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.174230  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0114 22:21:04.192952  109973 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.278025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.195380  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:04.195413  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:04.195455  109973 httplog.go:90] GET /healthz: (3.0971ms) 0 [Go-http-client/1.1 127.0.0.1:43916]
I0114 22:21:04.195565  109973 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.569304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.203502  109973 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:21:04.203526  109973 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:21:04.203586  109973 httplog.go:90] GET /healthz: (870.4µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.213322  109973 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.69116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.213582  109973 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0114 22:21:04.293370  109973 httplog.go:90] GET /healthz: (1.036961ms) 200 [Go-http-client/1.1 127.0.0.1:43860]
W0114 22:21:04.294425  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.294467  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.294515  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.294536  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.294554  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:21:04.294590  109973 factory.go:174] Creating scheduler from algorithm provider 'DefaultProvider'
W0114 22:21:04.295743  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.295905  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.295944  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.295996  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:21:04.296028  109973 shared_informer.go:206] Waiting for caches to sync for scheduler
I0114 22:21:04.296418  109973 reflector.go:153] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:200
I0114 22:21:04.296444  109973 reflector.go:188] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:200
I0114 22:21:04.297344  109973 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (543.07µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:04.298060  109973 get.go:251] Starting watch for /api/v1/pods, rv=57236 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m33s
I0114 22:21:04.303611  109973 httplog.go:90] GET /healthz: (898.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.305218  109973 httplog.go:90] GET /api/v1/namespaces/default: (1.272219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.307285  109973 httplog.go:90] POST /api/v1/namespaces: (1.652862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.308769  109973 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.044716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.313413  109973 httplog.go:90] POST /api/v1/namespaces/default/services: (4.14047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.315150  109973 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.105972ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.317131  109973 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.584527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
E0114 22:21:04.388442  109973 event_broadcaster.go:247] Unable to write event: 'Patch http://127.0.0.1:42573/apis/events.k8s.io/v1beta1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/events/test-pod.15e9e0c0b7c5b17d: dial tcp 127.0.0.1:42573: connect: connection refused' (may retry after sleeping)
I0114 22:21:04.396347  109973 shared_informer.go:236] caches populated
I0114 22:21:04.396373  109973 shared_informer.go:213] Caches are synced for scheduler 
I0114 22:21:04.396718  109973 reflector.go:153] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396718  109973 reflector.go:153] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396739  109973 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396748  109973 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396771  109973 reflector.go:153] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396782  109973 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396884  109973 reflector.go:153] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396899  109973 reflector.go:153] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396914  109973 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396905  109973 reflector.go:188] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396972  109973 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.396988  109973 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.397615  109973 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (489.385µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:04.397805  109973 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: (331.924µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44182]
I0114 22:21:04.397846  109973 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (509.915µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44178]
I0114 22:21:04.397854  109973 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (370.605µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44180]
I0114 22:21:04.397884  109973 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (405.413µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44176]
I0114 22:21:04.397847  109973 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (437.53µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44184]
I0114 22:21:04.398312  109973 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=57238 labels= fields= timeout=6m45s
I0114 22:21:04.398356  109973 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=57238 labels= fields= timeout=9m42s
I0114 22:21:04.398479  109973 get.go:251] Starting watch for /api/v1/nodes, rv=57236 labels= fields= timeout=7m38s
I0114 22:21:04.398706  109973 get.go:251] Starting watch for /api/v1/services, rv=57459 labels= fields= timeout=6m2s
I0114 22:21:04.398831  109973 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=57236 labels= fields= timeout=9m23s
I0114 22:21:04.398835  109973 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=57236 labels= fields= timeout=5m29s
I0114 22:21:04.399062  109973 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.399085  109973 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.400877  109973 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (217.782µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44194]
I0114 22:21:04.401379  109973 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=57237 labels= fields= timeout=7m39s
I0114 22:21:04.496640  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496679  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496686  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496692  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496698  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496704  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496709  109973 shared_informer.go:236] caches populated
I0114 22:21:04.496870  109973 shared_informer.go:236] caches populated
I0114 22:21:04.498862  109973 httplog.go:90] POST /api/v1/namespaces: (1.546133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44196]
I0114 22:21:04.499234  109973 node_lifecycle_controller.go:388] Sending events to api server.
I0114 22:21:04.499315  109973 node_lifecycle_controller.go:423] Controller is using taint based evictions.
W0114 22:21:04.499341  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:21:04.499391  109973 taint_manager.go:162] Sending events to api server.
I0114 22:21:04.499474  109973 node_lifecycle_controller.go:520] Controller will reconcile labels.
W0114 22:21:04.499501  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.499540  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:21:04.499584  109973 node_lifecycle_controller.go:554] Starting node controller
I0114 22:21:04.499604  109973 shared_informer.go:206] Waiting for caches to sync for taint
I0114 22:21:04.499860  109973 reflector.go:153] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.499899  109973 reflector.go:188] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.500766  109973 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (593.104µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44196]
I0114 22:21:04.501576  109973 get.go:251] Starting watch for /api/v1/namespaces, rv=57466 labels= fields= timeout=8m24s
I0114 22:21:04.599727  109973 shared_informer.go:236] caches populated
I0114 22:21:04.599799  109973 shared_informer.go:236] caches populated
I0114 22:21:04.600046  109973 reflector.go:153] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.600071  109973 reflector.go:188] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.600068  109973 reflector.go:153] Starting reflector *v1.Lease (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.600090  109973 reflector.go:188] Listing and watching *v1.Lease from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.600137  109973 reflector.go:153] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.600154  109973 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I0114 22:21:04.602212  109973 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?limit=500&resourceVersion=0: (447.019µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44208]
I0114 22:21:04.602404  109973 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (375.432µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44210]
I0114 22:21:04.602456  109973 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (420.118µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44212]
I0114 22:21:04.602858  109973 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=57237 labels= fields= timeout=6m24s
I0114 22:21:04.603083  109973 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=57238 labels= fields= timeout=7m5s
I0114 22:21:04.603106  109973 get.go:251] Starting watch for /api/v1/pods, rv=57236 labels= fields= timeout=5m35s
I0114 22:21:04.699777  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699806  109973 shared_informer.go:213] Caches are synced for taint 
I0114 22:21:04.699885  109973 taint_manager.go:186] Starting NoExecuteTaintManager
I0114 22:21:04.699950  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699973  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699979  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699983  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699987  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699990  109973 shared_informer.go:236] caches populated
I0114 22:21:04.699994  109973 shared_informer.go:236] caches populated
I0114 22:21:04.700005  109973 shared_informer.go:236] caches populated
I0114 22:21:04.700012  109973 shared_informer.go:236] caches populated
I0114 22:21:04.703111  109973 httplog.go:90] POST /api/v1/nodes: (2.226569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.703613  109973 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0114 22:21:04.703646  109973 taint_manager.go:438] Updating known taints on node node-0: []
I0114 22:21:04.704490  109973 node_tree.go:86] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0114 22:21:04.706555  109973 httplog.go:90] POST /api/v1/nodes: (1.75132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.706883  109973 node_tree.go:86] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0114 22:21:04.706958  109973 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0114 22:21:04.706979  109973 taint_manager.go:438] Updating known taints on node node-1: []
I0114 22:21:04.715796  109973 httplog.go:90] POST /api/v1/nodes: (8.707514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.715929  109973 node_tree.go:86] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0114 22:21:04.715957  109973 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0114 22:21:04.715977  109973 taint_manager.go:438] Updating known taints on node node-2: []
I0114 22:21:04.717968  109973 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods: (1.510153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.718381  109973 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563", Name:"testpod-1"}
I0114 22:21:04.718430  109973 scheduling_queue.go:839] About to try and schedule pod taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1
I0114 22:21:04.718444  109973 scheduler.go:562] Attempting to schedule pod: taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1
W0114 22:21:04.718636  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.718664  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:21:04.718674  109973 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:21:04.718844  109973 scheduler_binder.go:278] AssumePodVolumes for pod "taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1", node "node-0"
I0114 22:21:04.718869  109973 scheduler_binder.go:288] AssumePodVolumes for pod "taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1", node "node-0": all PVCs bound and nothing to do
I0114 22:21:04.718931  109973 factory.go:488] Attempting to bind testpod-1 to node-0
I0114 22:21:04.720759  109973 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods/testpod-1/binding: (1.593037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.720954  109973 scheduler.go:704] pod taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1 is bound successfully on node "node-0", 3 nodes evaluated, 3 nodes were found feasible.
I0114 22:21:04.722575  109973 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563", Name:"testpod-1"}
I0114 22:21:04.723641  109973 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/events: (2.438288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.820466  109973 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods/testpod-1: (1.751465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.822204  109973 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods/testpod-1: (1.254857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.823738  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.128758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:04.926278  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.768145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.026277  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.562394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.126275  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.517843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.226177  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.604612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.326273  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.485431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.398163  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.398263  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.398390  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.398458  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.398484  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.398531  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.426191  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.534215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.526510  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.864227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.602988  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:05.626156  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.505804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.726295  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.625993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.826322  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.600675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:05.925965  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.374025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.026351  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.668902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.126357  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.688822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.226003  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.477046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.326327  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.563405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.398381  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.398381  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.398563  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.398630  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.398649  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.398670  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.427329  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.830346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.526395  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.698218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.603179  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:06.626159  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.482948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.727258  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.625206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.828136  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.699499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:06.829612  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (4.664889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:06.829869  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (4.929779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44464]
I0114 22:21:06.829962  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (5.006441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44466]
I0114 22:21:06.831462  109973 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (492.823µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:06.834555  109973 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.178628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:06.834842  109973 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2020-01-14 22:21:06.830758172 +0000 UTC m=+258.849124128,}] Taint to Node node-0
I0114 22:21:06.834894  109973 controller_utils.go:215] Made sure that Node node-0 has no [] Taint
I0114 22:21:06.926052  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.481918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.027047  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.317751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.126110  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.491375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.226813  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.159213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.326168  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.489556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.398585  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.398585  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.398711  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.398817  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.398882  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.398878  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.428112  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.526252  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.550121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.603363  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:07.626294  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.782502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.726212  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.484716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.826444  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.678854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:07.931649  109973 httplog.go:90] GET /api/v1/nodes/node-0: (4.455591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.027770  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.006354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.127780  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.783152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.227014  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.158056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.327495  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.508403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.398810  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.398807  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.398914  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.399018  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.399087  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.399127  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.427942  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.327749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.527826  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.757873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.603637  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:08.628947  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.958303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.729210  109973 httplog.go:90] GET /api/v1/nodes/node-0: (4.25037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.826984  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.097853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.836065  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (5.25044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44216]
I0114 22:21:08.836122  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (5.420965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:08.836136  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (4.905589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:08.836596  109973 cacher.go:782] cacher (*core.Node): 1 objects queued in incoming channel.
I0114 22:21:08.927322  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.57714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.028469  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.229728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.126912  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.06862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.227292  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.607157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.329149  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.649432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.399057  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.399074  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.399160  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.399261  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.399323  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.399330  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.427411  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.135689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.526717  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.007081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.604121  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:09.626545  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.84702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.700044  109973 node_lifecycle_controller.go:787] Controller observed a new Node: "node-0"
I0114 22:21:09.700085  109973 controller_utils.go:167] Recording Registered Node node-0 in Controller event message for node node-0
I0114 22:21:09.700159  109973 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: region1:\x00:zone1
I0114 22:21:09.700401  109973 node_lifecycle_controller.go:787] Controller observed a new Node: "node-1"
I0114 22:21:09.700418  109973 controller_utils.go:167] Recording Registered Node node-1 in Controller event message for node node-1
I0114 22:21:09.700433  109973 node_lifecycle_controller.go:787] Controller observed a new Node: "node-2"
I0114 22:21:09.700439  109973 controller_utils.go:167] Recording Registered Node node-2 in Controller event message for node node-2
W0114 22:21:09.700496  109973 node_lifecycle_controller.go:1058] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0114 22:21:09.700610  109973 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"c24d9bf6-5cd5-4843-b4b0-332b9c4659bd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0114 22:21:09.700659  109973 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"6109815c-9049-4826-85fd-c116ba8814af", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0114 22:21:09.700721  109973 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"7ca87173-9f3c-4694-a352-a66bd1f407ee", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0114 22:21:09.701933  109973 node_lifecycle_controller.go:1137] node node-0 hasn't been updated for 1.409062ms. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-01-14 22:21:08 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0114 22:21:09.702024  109973 node_lifecycle_controller.go:1127] Condition MemoryPressure of node node-0 was never updated by kubelet
I0114 22:21:09.702035  109973 node_lifecycle_controller.go:1127] Condition DiskPressure of node node-0 was never updated by kubelet
I0114 22:21:09.702044  109973 node_lifecycle_controller.go:1127] Condition PIDPressure of node node-0 was never updated by kubelet
I0114 22:21:09.705171  109973 httplog.go:90] POST /api/v1/namespaces/default/events: (2.976775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.707158  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (4.1291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:09.707500  109973 httplog.go:90] POST /api/v1/namespaces/default/events: (1.861579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.707601  109973 node_lifecycle_controller.go:886] Node node-0 is NotReady as of 2020-01-14 22:21:09.707560745 +0000 UTC m=+261.725926727. Adding it to the Taint queue.
W0114 22:21:09.707680  109973 node_lifecycle_controller.go:1058] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0114 22:21:09.707728  109973 node_lifecycle_controller.go:1058] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0114 22:21:09.707756  109973 node_lifecycle_controller.go:1259] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0114 22:21:09.709846  109973 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (425.606µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:09.709946  109973 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (433.361µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44766]
I0114 22:21:09.711882  109973 httplog.go:90] POST /api/v1/namespaces/default/events: (3.927722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:09.714050  109973 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.554786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44766]
I0114 22:21:09.714436  109973 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2020-01-14 22:21:09.707890231 +0000 UTC m=+261.726256213,}] Taint to Node node-0
I0114 22:21:09.714475  109973 store.go:365] GuaranteedUpdate of /a0dfeac7-0903-47c8-84b2-2a2b940f4e77/minions/node-0 failed because of a conflict, going to retry
I0114 22:21:09.715195  109973 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (534.63µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44766]
I0114 22:21:09.716620  109973 httplog.go:90] PATCH /api/v1/nodes/node-0: (4.980874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:09.716963  109973 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2020-01-14 22:21:09.708468023 +0000 UTC m=+261.726834008,}] Taint to Node node-0
I0114 22:21:09.717006  109973 controller_utils.go:215] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0114 22:21:09.717292  109973 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0114 22:21:09.717316  109973 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2020-01-14 22:21:09 +0000 UTC}]
I0114 22:21:09.717371  109973 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1 at 2020-01-14 22:21:09.717359673 +0000 UTC m=+261.735725667 to be fired at 2020-01-14 22:26:09.717359673 +0000 UTC m=+561.735725667
I0114 22:21:09.718727  109973 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.338335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44766]
I0114 22:21:09.719078  109973 controller_utils.go:215] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2020-01-14 22:21:06 +0000 UTC,}] Taint
I0114 22:21:09.719187  109973 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0114 22:21:09.719210  109973 taint_manager.go:438] Updating known taints on node node-0: []
I0114 22:21:09.719223  109973 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I0114 22:21:09.719233  109973 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1 at 2020-01-14 22:21:09.719229753 +0000 UTC m=+261.737595752
I0114 22:21:09.719279  109973 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563", Name:"testpod-1", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/testpod-1
I0114 22:21:09.721120  109973 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/events: (1.560797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:09.726020  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.473641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:09.827710  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.014776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:09.926603  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.953159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.026345  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.711393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.126298  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.611608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.227413  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.784663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.326327  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.660517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.399269  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.399292  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.399275  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.399466  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.399616  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.399656  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.426199  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.578473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.527091  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.459677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.604469  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:10.626344  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.680762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.726274  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.621384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.826095  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.563528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.839932  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.543847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:10.841650  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (3.874143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44832]
I0114 22:21:10.841650  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (4.264135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.843586  109973 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (426.975µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.846401  109973 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.867553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.846707  109973 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2020-01-14 22:21:10.842892649 +0000 UTC m=+262.861258635,}] Taint to Node node-0
I0114 22:21:10.847418  109973 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (499.648µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.850182  109973 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.028749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:10.850415  109973 controller_utils.go:215] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2020-01-14 22:21:09 +0000 UTC,}] Taint
I0114 22:21:10.926149  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.696535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.026015  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.413638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.127406  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.746612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.226102  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.475926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.326298  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.646897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.399487  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.399513  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.399538  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.399625  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.399782  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.399796  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.428671  109973 httplog.go:90] GET /api/v1/nodes/node-0: (4.047451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.526053  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.504514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.604683  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:11.626155  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.538512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.726280  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.611094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.826252  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.581959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:11.925871  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.276628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.026141  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.53931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.126533  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.103728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.226088  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.469349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.326009  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.331432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.399706  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.399713  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.399713  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.399902  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.399930  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.399936  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.425920  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.310561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.527757  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.075012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.604935  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:12.628714  109973 httplog.go:90] GET /api/v1/nodes/node-0: (4.177665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.725928  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.308325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.826055  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.475566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.843321  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.316542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.846427  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.839418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:12.847234  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.453945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44586]
I0114 22:21:12.926362  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.700085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.025941  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.440547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.126330  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.686154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.227530  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.59873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.327368  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.602192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.399923  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.399917  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.399944  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.400058  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.400096  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.400098  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.426180  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.500856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.526046  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.444502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.605132  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:13.627532  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.856089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.726197  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.464135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.826130  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.493712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:13.926173  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.519384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.025901  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.315824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.126061  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.360058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.226009  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.395095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.306375  109973 httplog.go:90] GET /api/v1/namespaces/default: (1.97074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.308130  109973 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.234593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.309856  109973 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.0785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.326024  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.450419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.400319  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.400483  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.400490  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.400498  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.400792  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.400799  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.426098  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.531051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.526054  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.39023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.605364  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:14.626085  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.474977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.707970  109973 node_lifecycle_controller.go:1084] ReadyCondition for Node node-0 transitioned from &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2020-01-14 22:21:08 +0000 UTC,LastTransitionTime:2020-01-14 22:21:09.701924266 +0000 UTC m=+261.720290232,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,} to &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-01-14 22:21:12 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0114 22:21:14.708045  109973 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I0114 22:21:14.708097  109973 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I0114 22:21:14.708123  109973 node_lifecycle_controller.go:1092] Node node-2 ReadyCondition updated. Updating timestamp.
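The `node_lifecycle_controller` lines above show node-0's Ready condition transitioning from `Unknown` to `False`; with the `TaintBasedEvictions` feature gate enabled (as in this test), that transition is what drives the controller to apply a `NoExecute` taint so pods without a matching toleration get evicted. A minimal sketch of that taint-selection decision, using simplified stand-in types rather than the controller's real `corev1` structs:

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 types involved. Field names mirror
// the NodeCondition printed in the log, but this is an illustrative
// sketch, not the node lifecycle controller's actual code.
type NodeCondition struct {
	Type   string
	Status string // "True", "False", or "Unknown"
}

type Taint struct {
	Key    string
	Effect string
}

// taintForReadyCondition mirrors the idea behind taint-based evictions:
// Ready=False maps to the not-ready NoExecute taint, Ready=Unknown maps
// to the unreachable NoExecute taint, and Ready=True yields no taint.
func taintForReadyCondition(c NodeCondition) *Taint {
	switch c.Status {
	case "False":
		return &Taint{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"}
	case "Unknown":
		return &Taint{Key: "node.kubernetes.io/unreachable", Effect: "NoExecute"}
	default:
		return nil
	}
}

func main() {
	// The transition recorded in the log: Status Unknown -> False.
	before := NodeCondition{Type: "Ready", Status: "Unknown"}
	after := NodeCondition{Type: "Ready", Status: "False"}
	fmt.Println(taintForReadyCondition(before).Key)
	fmt.Println(taintForReadyCondition(after).Key)
}
```

The taint keys `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` are the real Kubernetes taints involved here; everything else in the sketch is simplified.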
I0114 22:21:14.727116  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.486884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.826354  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.688757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.846737  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.554279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.849117  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.962832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:14.850091  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.043462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:14.927018  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.484602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.025976  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.40016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.128718  109973 httplog.go:90] GET /api/v1/nodes/node-0: (4.147539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.226219  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.486857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.326102  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.382955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.400513  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.400651  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.400666  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.400676  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.400968  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.400991  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.426170  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.599284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.525899  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.438144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.605579  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:15.626076  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.466363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.726340  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.653319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.727918  109973 request.go:853] Got a Retry-After 1s response for attempt 1 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:15.826200  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.533505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:15.926159  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.540407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.026338  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.632864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.126059  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.407196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.225951  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.327503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.326073  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.444088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
E0114 22:21:16.380534  109973 event_broadcaster.go:247] Unable to write event: 'Patch http://127.0.0.1:42573/apis/events.k8s.io/v1beta1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/events/test-pod.15e9e0c0b7c5b17d: dial tcp 127.0.0.1:42573: connect: connection refused' (may retry after sleeping)
E0114 22:21:16.380578  109973 event_broadcaster.go:197] Unable to write event '&v1beta1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"test-pod.15e9e0c0b7c5b17d", GenerateName:"", Namespace:"permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432", SelfLink:"/apis/events.k8s.io/v1beta1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/events/test-pod.15e9e0c0b7c5b17d", UID:"c194fc3e-71d6-476d-a54b-cad58a19eb1e", ResourceVersion:"30189", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714637062, loc:(*time.Location)(0x739b120)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0x2938b048, ext:63714637062, loc:(*time.Location)(0x739b120)}}, Series:(*v1beta1.EventSeries)(0xc0503dd290), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-60e818c2-3718-11ea-8603-da2f7a5855b4", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432", Name:"test-pod", UID:"a211734e-8da0-420c-8b63-52a82cd945df", APIVersion:"v1", ResourceVersion:"30188", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"pod \"test-pod\" rejected while waiting at permit: rejectAllPods", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"default-scheduler", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}' (retry limit exceeded!)
I0114 22:21:16.400641  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.400814  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.400825  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.400860  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.401167  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.401167  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.426013  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.459294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.526243  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.573412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.605864  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:16.626358  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.555334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.726226  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.591878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.728455  109973 request.go:853] Got a Retry-After 1s response for attempt 2 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:16.825950  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.350918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.850090  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.582337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.853084  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (3.303234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:16.853821  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.912468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:16.926947  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.425595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.026177  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.582388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.126055  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.504068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.226157  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.480359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.326005  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.416782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.400820  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.400968  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.401062  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.401317  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.401319  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.401443  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.426192  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.563945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.525895  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.405908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.606103  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:17.625946  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.407901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.726010  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.461026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.728874  109973 request.go:853] Got a Retry-After 1s response for attempt 3 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:17.826063  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.374402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:17.925993  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.392997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.026094  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.398062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.126023  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.476029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.225896  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.251005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.326029  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.522979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.401018  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.401118  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.401244  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.401508  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.401517  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.401627  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.426116  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.510093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.525966  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.364829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.606322  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:18.627577  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.792089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.726206  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.567215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.729429  109973 request.go:853] Got a Retry-After 1s response for attempt 4 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:18.826815  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.009918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.853349  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.31382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.857976  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.143409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:18.859065  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (4.432166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:18.926241  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.52368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.026350  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.651508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.126209  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.506551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.227273  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.453938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.326456  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.80228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.401194  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.401304  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.401427  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.401640  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.401751  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.401786  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.428670  109973 httplog.go:90] GET /api/v1/nodes/node-0: (4.030166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.527067  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.445214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.606529  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:19.626191  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.606795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.708406  109973 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I0114 22:21:19.708489  109973 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I0114 22:21:19.708533  109973 node_lifecycle_controller.go:1092] Node node-2 ReadyCondition updated. Updating timestamp.
I0114 22:21:19.726138  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.546911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.729911  109973 request.go:853] Got a Retry-After 1s response for attempt 5 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:19.826258  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.610243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:19.846670  109973 request.go:853] Got a Retry-After 1s response for attempt 1 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:19.926226  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.571834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.025931  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.331738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.126317  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.572934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.227326  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.665082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.326151  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.489229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.401298  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.401456  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.401641  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.401783  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.401918  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.401966  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.426351  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.676057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.526290  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.593037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.606736  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:20.626341  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.604974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.726529  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.774505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.730389  109973 request.go:853] Got a Retry-After 1s response for attempt 6 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:20.826034  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.450137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.847153  109973 request.go:853] Got a Retry-After 1s response for attempt 2 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:20.857473  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.289777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.861906  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.541396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:20.863637  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (3.957637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:20.926105  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.488854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.026193  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.544664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.126129  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.528804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.226110  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.474535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.326523  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.51464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.401518  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.401642  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.401839  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.402020  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.402081  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.402143  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.427564  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.037062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.526225  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.602358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.606956  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:21.626227  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.619487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.726279  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.604634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.730857  109973 request.go:853] Got a Retry-After 1s response for attempt 7 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:21.826256  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.723811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:21.847749  109973 request.go:853] Got a Retry-After 1s response for attempt 3 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:21.926167  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.593124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.027355  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.6685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.126159  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.536157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.226092  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.474341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.326266  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.401724  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.401826  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.402007  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.402205  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.402243  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.402275  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.426432  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.749922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.526252  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.58441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.607170  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:22.626340  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.604079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.726105  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.440984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.731501  109973 request.go:853] Got a Retry-After 1s response for attempt 8 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:22.826243  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.604008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.848596  109973 request.go:853] Got a Retry-After 1s response for attempt 4 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:22.860513  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.168254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.864767  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.184752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:22.866624  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.954201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:22.926332  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.609884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.025990  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.341445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.126298  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.667949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.226545  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.814673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.326497  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.739682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.402008  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.402029  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.402320  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.402221  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.402401  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.402413  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.426475  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.805631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.527510  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.848867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.607414  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:23.626319  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.771423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.727259  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.69855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.732056  109973 request.go:853] Got a Retry-After 1s response for attempt 9 to http://127.0.0.1:42573/api/v1/namespaces/permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/pods/test-pod
I0114 22:21:23.826167  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.517126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:23.849106  109973 request.go:853] Got a Retry-After 1s response for attempt 5 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:23.927214  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.754914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.027714  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.946221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.126441  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.732387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.226257  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.537023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.306101  109973 httplog.go:90] GET /api/v1/namespaces/default: (1.51231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.307855  109973 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.283005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.310053  109973 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.253699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.326258  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.580523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.402382  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.402476  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.402565  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.402624  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.402386  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.402625  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.429785  109973 httplog.go:90] GET /api/v1/nodes/node-0: (5.227339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.526435  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.72747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.607619  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:24.626449  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.672886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.708840  109973 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I0114 22:21:24.708938  109973 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I0114 22:21:24.710313  109973 node_lifecycle_controller.go:1092] Node node-2 ReadyCondition updated. Updating timestamp.
I0114 22:21:24.726264  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.581902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
E0114 22:21:24.732709  109973 factory.go:472] Error getting pod permit-pluginsed3fcc41-a870-4618-9a54-3ef27cfea432/test-pod for retry: an error on the server ("") has prevented the request from succeeding (get pods test-pod); retrying...
I0114 22:21:24.826131  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.529014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.849606  109973 request.go:853] Got a Retry-After 1s response for attempt 6 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:24.863726  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.372993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.867209  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.773149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:24.869135  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.883123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:24.926345  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.667359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.026349  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.724575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.126186  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.555456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.226085  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.543641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.326168  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.53481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.402751  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.402760  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.402795  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.402800  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.402806  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.402883  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.426336  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.800856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.526367  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.692101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.607848  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:25.626345  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.668859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.726406  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.66475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.829158  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.895272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:25.850204  109973 request.go:853] Got a Retry-After 1s response for attempt 7 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:25.926481  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.774071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.026674  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.947019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.126639  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.884776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.227280  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.518019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.326468  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.810136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.402983  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.403000  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.403012  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.402985  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.402983  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.403300  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.426437  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.737066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.526390  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.678068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.608066  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:26.626644  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.718259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.726478  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.513157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.826518  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.796686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.850762  109973 request.go:853] Got a Retry-After 1s response for attempt 8 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:26.868470  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.710627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.870686  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.820845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:26.871827  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.084158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:26.926163  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.605839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.026206  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.543756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.126353  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.662762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.226334  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.620804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.326398  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.687365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.403173  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.403220  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.403173  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.403278  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.403228  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.403428  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.426632  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.682962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.526571  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.916733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.608396  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:27.627676  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.021582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.726370  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.658356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.826458  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.730096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:27.851279  109973 request.go:853] Got a Retry-After 1s response for attempt 9 to http://127.0.0.1:39407/api/v1/namespaces/permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/pods/signalling-pod
I0114 22:21:27.926325  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.67041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.026253  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.552211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.126433  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.70578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.226262  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.612967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.327135  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.612042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.403406  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.403769  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.403405  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.403414  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.403413  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.403431  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.426706  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.750807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.527491  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.736298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.608634  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:28.626553  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.780931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.726737  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.819045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.826288  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.625177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
E0114 22:21:28.851929  109973 factory.go:472] Error getting pod permit-plugin24cae72e-1d85-4efb-b65a-a53c507b85cb/signalling-pod for retry: an error on the server ("") has prevented the request from succeeding (get pods signalling-pod); retrying...
I0114 22:21:28.872864  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.451549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:28.874160  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.68265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:28.876010  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.977016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46540]
I0114 22:21:28.926419  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.810328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.027650  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.032834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.126329  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.706691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.226224  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.547172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.326309  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.719056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.403831  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.404037  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.404329  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.404348  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.404351  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.404360  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.426424  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.617246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.527687  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.516948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.608840  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:29.626364  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.448305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.710703  109973 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I0114 22:21:29.710815  109973 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I0114 22:21:29.710844  109973 node_lifecycle_controller.go:1092] Node node-2 ReadyCondition updated. Updating timestamp.
I0114 22:21:29.727524  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.701674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.826236  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.560196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:29.926205  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.651927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.025886  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.423691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.126273  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.615019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.226314  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.680416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.326468  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.763792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.404012  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.404335  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.404469  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.404518  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.404548  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.404607  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.426201  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.677269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.526193  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.705524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.609043  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:30.628076  109973 httplog.go:90] GET /api/v1/nodes/node-0: (3.298191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.726255  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.683862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.827598  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.956953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.877546  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.548501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:30.879830  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.03574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:30.879830  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (4.763566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0114 22:21:30.926156  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.624042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.026425  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.759136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.126225  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.589401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.226278  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.665456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.326423  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.686902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.404241  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.404511  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.404637  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.404641  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.404725  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.404754  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.426123  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.488894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.526483  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.778052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.609226  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:31.626264  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.636378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.727604  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.963107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.826749  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.57133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:31.926217  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.557752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.026169  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.561071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.126795  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.022018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.225780  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.307509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.325901  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.290623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.404549  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.404706  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.404789  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.404808  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.404812  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.404905  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.425985  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.391112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.526388  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.838621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.609397  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:32.625945  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.340345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.726105  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.646353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.826367  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.647772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.881870  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.250373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.884056  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.829855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:32.886866  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (5.947006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:32.926050  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.430345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.026288  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.678269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.126219  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.570402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.226218  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.561825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.326274  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.609503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.404755  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.404896  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.404907  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.405043  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.404951  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.404953  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.426151  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.531667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.526038  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.441561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.609599  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:33.626056  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.410723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.726218  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.632342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.825964  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.325302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:33.926109  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.451669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.026106  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.432747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.125989  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.520606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.226386  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.745443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.307295  109973 httplog.go:90] GET /api/v1/namespaces/default: (2.648876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.309315  109973 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.445304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.311927  109973 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.189673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.326342  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.710877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.405055  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.405148  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.405148  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.405236  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.405253  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.405265  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.426490  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.839427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.526420  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.7652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.609804  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:34.627616  109973 httplog.go:90] GET /api/v1/nodes/node-0: (2.939117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.711123  109973 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I0114 22:21:34.711204  109973 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I0114 22:21:34.711263  109973 node_lifecycle_controller.go:1092] Node node-2 ReadyCondition updated. Updating timestamp.
I0114 22:21:34.726029  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.311036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.826064  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.54155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.827856  109973 httplog.go:90] GET /api/v1/nodes/node-0: (1.272683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
Jan 14 22:21:34.828: INFO: Waiting up to 15s for pod "testpod-1" in namespace "taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563" to be "updated with tolerationSeconds=300"
I0114 22:21:34.829765  109973 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods/testpod-1: (1.234606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
Jan 14 22:21:34.830: INFO: Pod "testpod-1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.676936ms
Jan 14 22:21:34.830: INFO: Pod "testpod-1" satisfied condition "updated with tolerationSeconds=300"
I0114 22:21:34.834839  109973 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods/testpod-1: (4.495666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.835041  109973 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563", Name:"testpod-1"}
I0114 22:21:34.837795  109973 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsb497bd6c-7f88-47d0-b089-e6c338eb8563/pods/testpod-1: (1.076803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.843302  109973 node_tree.go:100] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0114 22:21:34.843369  109973 taint_manager.go:422] Noticed node deletion: "node-0"
I0114 22:21:34.847358  109973 node_tree.go:100] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0114 22:21:34.847437  109973 taint_manager.go:422] Noticed node deletion: "node-1"
I0114 22:21:34.850187  109973 node_tree.go:100] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0114 22:21:34.850191  109973 httplog.go:90] DELETE /api/v1/nodes: (11.94379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.850206  109973 taint_manager.go:422] Noticed node deletion: "node-2"
I0114 22:21:34.883975  109973 httplog.go:90] PUT /api/v1/nodes/node-0/status: (1.391598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.887508  109973 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.516035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44462]
I0114 22:21:34.889760  109973 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.137813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:35.405263  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.405354  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.405390  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.405340  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.405439  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.405449  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.610000  109973 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I0114 22:21:35.850801  109973 node_lifecycle_controller.go:601] Shutting down node controller
I0114 22:21:35.850870  109973 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=57237&timeout=6m24s&timeoutSeconds=384&watch=true: (31.24818374s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44210]
I0114 22:21:35.850888  109973 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=57236&timeout=5m29s&timeoutSeconds=329&watch=true: (31.4522842s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44190]
I0114 22:21:35.851035  109973 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=57236&timeout=5m35s&timeoutSeconds=335&watch=true: (31.248148731s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44208]
I0114 22:21:35.851035  109973 httplog.go:90] GET /apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=57238&timeout=7m5s&timeoutSeconds=425&watch=true: (31.248201604s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44212]
I0114 22:21:35.851067  109973 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=57237&timeout=7m39s&timeoutSeconds=459&watch=true: (31.449762545s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44194]
I0114 22:21:35.851093  109973 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=57238&timeout=6m45s&timeoutSeconds=405&watch=true: (31.453057734s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44182]
I0114 22:21:35.851180  109973 httplog.go:90] GET /api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=57466&timeout=8m24s&timeoutSeconds=504&watch=true: (31.349805586s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44196]
I0114 22:21:35.851194  109973 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=57236&timeout=7m38s&timeoutSeconds=458&watch=true: (31.452917858s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0114 22:21:35.851205  109973 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=57236&timeout=9m23s&timeoutSeconds=563&watch=true: (31.452620727s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44192]
I0114 22:21:35.851183  109973 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=57236&timeoutSeconds=393&watch=true: (31.553383847s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0114 22:21:35.851225  109973 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=57459&timeout=6m2s&timeoutSeconds=362&watch=true: (31.452618392s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44188]
I0114 22:21:35.851225  109973 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=57238&timeout=9m42s&timeoutSeconds=582&watch=true: (31.452987509s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44176]
I0114 22:21:35.852129  109973 httplog.go:90] DELETE /api/v1/nodes: (1.266396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:35.852355  109973 controller.go:180] Shutting down kubernetes service endpoint reconciler
I0114 22:21:35.855026  109973 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.379672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:35.857116  109973 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.655508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46692]
I0114 22:21:35.857387  109973 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0114 22:21:35.857497  109973 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=57236&timeout=6m21s&timeoutSeconds=381&watch=true: (34.864122273s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations (35.00s)
        taint_test.go:814: Failed to taint node in test 1 <node-0>, err: timed out waiting for the condition

				from junit_20200114-221042.xml



2609 Passed Tests

4 Skipped Tests