Result: FAILURE
Tests: 2 failed / 2866 succeeded
Started: 2019-09-19 01:46
Elapsed: 26m50s
Revision:
Builder: gke-prow-ssd-pool-1a225945-gzk6
Refs: master:652f4016, 82644:cfa77048, 82726:860845b5
Pod: 21012d41-da7f-11e9-9b9c-be2383461c70
Resultstore: https://source.cloud.google.com/results/invocations/de881a28-b798-497c-8667-512e40f4f6f9/targets/test
Infra-commit: fe9f237a8
Repo: k8s.io/kubernetes
Repo-commit: 063904e5b53d2b068f4d49517bb1babdfddecba6
Repos: k8s.io/kubernetes: master:652f4016d9c934f87121ea32ca700d5687a19029, 82644:cfa77048f9b7fa96bad4eebb059cf60acc9dfd29, 82726:860845b58216fa7fa058bacd06b4be5caefa6251

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 1m5s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 02:11:05.753532  108857 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true TaintNodesByCondition:true]}
I0919 02:11:05.753682  108857 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
--- FAIL: TestTaintBasedEvictions (65.22s)

From junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-020103.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds
W0919 02:11:05.754515  108857 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 02:11:05.754531  108857 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 02:11:05.754542  108857 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 02:11:05.754552  108857 master.go:259] Using reconciler: 
I0919 02:11:05.757835  108857 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.758127  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.758231  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.759350  108857 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 02:11:05.759403  108857 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.759559  108857 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 02:11:05.760213  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.760241  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.761203  108857 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 02:11:05.761240  108857 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 02:11:05.761242  108857 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.761405  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.761428  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.761826  108857 watch_cache.go:405] Replace watchCache (rev: 52022) 
I0919 02:11:05.763090  108857 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 02:11:05.763125  108857 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.763244  108857 watch_cache.go:405] Replace watchCache (rev: 52023) 
I0919 02:11:05.763265  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.763282  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.763353  108857 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 02:11:05.764131  108857 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 02:11:05.764351  108857 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.764571  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.764594  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.764685  108857 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 02:11:05.765860  108857 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 02:11:05.765960  108857 watch_cache.go:405] Replace watchCache (rev: 52024) 
I0919 02:11:05.766033  108857 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.766200  108857 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 02:11:05.766661  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.766683  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.767742  108857 watch_cache.go:405] Replace watchCache (rev: 52028) 
I0919 02:11:05.768249  108857 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 02:11:05.768425  108857 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.768559  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.768581  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.768668  108857 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 02:11:05.769235  108857 watch_cache.go:405] Replace watchCache (rev: 52028) 
I0919 02:11:05.769747  108857 watch_cache.go:405] Replace watchCache (rev: 52029) 
I0919 02:11:05.769761  108857 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 02:11:05.769943  108857 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.770071  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.770088  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.770158  108857 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 02:11:05.772025  108857 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 02:11:05.772187  108857 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.772815  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.772853  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.772954  108857 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 02:11:05.776062  108857 watch_cache.go:405] Replace watchCache (rev: 52034) 
I0919 02:11:05.776348  108857 watch_cache.go:405] Replace watchCache (rev: 52034) 
I0919 02:11:05.776743  108857 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 02:11:05.776913  108857 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.777075  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.777103  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.777197  108857 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 02:11:05.778871  108857 watch_cache.go:405] Replace watchCache (rev: 52039) 
I0919 02:11:05.779157  108857 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 02:11:05.779321  108857 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.779466  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.779509  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.779596  108857 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 02:11:05.781509  108857 watch_cache.go:405] Replace watchCache (rev: 52042) 
I0919 02:11:05.781959  108857 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 02:11:05.782041  108857 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 02:11:05.782922  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.783067  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.783088  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.783292  108857 watch_cache.go:405] Replace watchCache (rev: 52049) 
I0919 02:11:05.784860  108857 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 02:11:05.785030  108857 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.785170  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.785191  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.785267  108857 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 02:11:05.786689  108857 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 02:11:05.786772  108857 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 02:11:05.786848  108857 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.787005  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.787052  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.787989  108857 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 02:11:05.788030  108857 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.788163  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.788182  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.788253  108857 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 02:11:05.788840  108857 watch_cache.go:405] Replace watchCache (rev: 52056) 
I0919 02:11:05.789116  108857 watch_cache.go:405] Replace watchCache (rev: 52056) 
I0919 02:11:05.789583  108857 watch_cache.go:405] Replace watchCache (rev: 52056) 
I0919 02:11:05.791143  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.791175  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.794961  108857 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.795120  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.795143  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.795697  108857 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 02:11:05.795725  108857 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 02:11:05.796218  108857 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.796465  108857 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.797199  108857 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.797562  108857 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 02:11:05.798420  108857 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.798719  108857 watch_cache.go:405] Replace watchCache (rev: 52064) 
I0919 02:11:05.799992  108857 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.800740  108857 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.801447  108857 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.801547  108857 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.801665  108857 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.802295  108857 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.803035  108857 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.803848  108857 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.804757  108857 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.805108  108857 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.805821  108857 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.806175  108857 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.807127  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.807407  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.807672  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.807922  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.808041  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.808122  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.808214  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.808981  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.809279  108857 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.810036  108857 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.810840  108857 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.811557  108857 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.812062  108857 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.813441  108857 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.814063  108857 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.815204  108857 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.816908  108857 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.842913  108857 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.843820  108857 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.844318  108857 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.844712  108857 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 02:11:05.844848  108857 master.go:461] Enabling API group "authentication.k8s.io".
I0919 02:11:05.844924  108857 master.go:461] Enabling API group "authorization.k8s.io".
I0919 02:11:05.845179  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.845512  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.845613  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.846750  108857 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 02:11:05.846947  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.846962  108857 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 02:11:05.847098  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.847114  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.848040  108857 watch_cache.go:405] Replace watchCache (rev: 52102) 
I0919 02:11:05.848106  108857 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 02:11:05.848259  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.848313  108857 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 02:11:05.848401  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.848423  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.849204  108857 watch_cache.go:405] Replace watchCache (rev: 52102) 
I0919 02:11:05.851102  108857 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 02:11:05.851122  108857 master.go:461] Enabling API group "autoscaling".
I0919 02:11:05.851203  108857 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 02:11:05.851267  108857 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.851476  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.851503  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.852319  108857 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 02:11:05.852411  108857 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 02:11:05.852510  108857 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.852649  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.852668  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.852668  108857 watch_cache.go:405] Replace watchCache (rev: 52105) 
I0919 02:11:05.853533  108857 watch_cache.go:405] Replace watchCache (rev: 52105) 
I0919 02:11:05.853687  108857 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 02:11:05.853709  108857 master.go:461] Enabling API group "batch".
I0919 02:11:05.853854  108857 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.853991  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.854009  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.854085  108857 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 02:11:05.854900  108857 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 02:11:05.854931  108857 master.go:461] Enabling API group "certificates.k8s.io".
I0919 02:11:05.855075  108857 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.855204  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.855222  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.855288  108857 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 02:11:05.855294  108857 watch_cache.go:405] Replace watchCache (rev: 52106) 
I0919 02:11:05.858286  108857 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 02:11:05.858466  108857 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.858482  108857 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 02:11:05.858554  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.858566  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.859255  108857 watch_cache.go:405] Replace watchCache (rev: 52109) 
I0919 02:11:05.859766  108857 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 02:11:05.859786  108857 master.go:461] Enabling API group "coordination.k8s.io".
I0919 02:11:05.859801  108857 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 02:11:05.859931  108857 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 02:11:05.859969  108857 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.860083  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.860103  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.860418  108857 watch_cache.go:405] Replace watchCache (rev: 52109) 
I0919 02:11:05.860788  108857 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 02:11:05.860815  108857 master.go:461] Enabling API group "extensions".
I0919 02:11:05.860959  108857 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.861006  108857 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 02:11:05.861103  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.861121  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.861236  108857 watch_cache.go:405] Replace watchCache (rev: 52109) 
I0919 02:11:05.861766  108857 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 02:11:05.861902  108857 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.862042  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.862061  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.862068  108857 watch_cache.go:405] Replace watchCache (rev: 52109) 
I0919 02:11:05.862124  108857 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 02:11:05.863140  108857 watch_cache.go:405] Replace watchCache (rev: 52109) 
I0919 02:11:05.863281  108857 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 02:11:05.863300  108857 master.go:461] Enabling API group "networking.k8s.io".
I0919 02:11:05.863320  108857 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 02:11:05.863328  108857 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.863448  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.863461  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.865183  108857 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 02:11:05.865199  108857 master.go:461] Enabling API group "node.k8s.io".
I0919 02:11:05.865256  108857 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 02:11:05.865358  108857 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.865553  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.865572  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.866190  108857 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 02:11:05.866340  108857 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.866535  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.866571  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.866654  108857 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 02:11:05.866674  108857 watch_cache.go:405] Replace watchCache (rev: 52111) 
I0919 02:11:05.867670  108857 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 02:11:05.867695  108857 master.go:461] Enabling API group "policy".
I0919 02:11:05.867700  108857 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 02:11:05.867728  108857 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.867962  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.867988  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.868159  108857 watch_cache.go:405] Replace watchCache (rev: 52112) 
I0919 02:11:05.868191  108857 watch_cache.go:405] Replace watchCache (rev: 52112) 
I0919 02:11:05.869029  108857 watch_cache.go:405] Replace watchCache (rev: 52114) 
I0919 02:11:05.869998  108857 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 02:11:05.870171  108857 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.870304  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.870324  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.871329  108857 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 02:11:05.872225  108857 watch_cache.go:405] Replace watchCache (rev: 52115) 
I0919 02:11:05.885657  108857 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 02:11:05.885710  108857 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.885750  108857 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 02:11:05.885841  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.885863  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.887177  108857 watch_cache.go:405] Replace watchCache (rev: 52126) 
I0919 02:11:05.888134  108857 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 02:11:05.888206  108857 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 02:11:05.888344  108857 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.888614  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.888640  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.889596  108857 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 02:11:05.889626  108857 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 02:11:05.889659  108857 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.889802  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.889825  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.890306  108857 watch_cache.go:405] Replace watchCache (rev: 52126) 
I0919 02:11:05.891243  108857 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 02:11:05.891469  108857 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.891595  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.891611  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.891644  108857 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 02:11:05.892935  108857 watch_cache.go:405] Replace watchCache (rev: 52130) 
I0919 02:11:05.893766  108857 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 02:11:05.893809  108857 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.893994  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.894023  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.894108  108857 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 02:11:05.894474  108857 watch_cache.go:405] Replace watchCache (rev: 52130) 
I0919 02:11:05.896354  108857 watch_cache.go:405] Replace watchCache (rev: 52132) 
I0919 02:11:05.906096  108857 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 02:11:05.906245  108857 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 02:11:05.906281  108857 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.906489  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.906516  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.907179  108857 watch_cache.go:405] Replace watchCache (rev: 52140) 
I0919 02:11:05.907605  108857 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 02:11:05.907632  108857 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 02:11:05.907659  108857 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 02:11:05.908583  108857 watch_cache.go:405] Replace watchCache (rev: 52142) 
I0919 02:11:05.909508  108857 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.909662  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.909682  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.910202  108857 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 02:11:05.910329  108857 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 02:11:05.910330  108857 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.910673  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.910739  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.911024  108857 watch_cache.go:405] Replace watchCache (rev: 52142) 
I0919 02:11:05.911593  108857 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 02:11:05.911608  108857 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 02:11:05.911756  108857 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 02:11:05.911795  108857 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 02:11:05.911866  108857 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.911962  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.912027  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.912957  108857 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 02:11:05.913137  108857 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.913274  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.913300  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.913410  108857 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 02:11:05.914092  108857 watch_cache.go:405] Replace watchCache (rev: 52144) 
I0919 02:11:05.914581  108857 watch_cache.go:405] Replace watchCache (rev: 52144) 
I0919 02:11:05.914637  108857 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 02:11:05.914666  108857 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.914725  108857 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 02:11:05.914763  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.914775  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.917203  108857 watch_cache.go:405] Replace watchCache (rev: 52144) 
I0919 02:11:05.917224  108857 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 02:11:05.917265  108857 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.917395  108857 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 02:11:05.917430  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.917451  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.918038  108857 watch_cache.go:405] Replace watchCache (rev: 52145) 
I0919 02:11:05.918188  108857 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 02:11:05.918265  108857 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 02:11:05.918332  108857 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.918445  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.918463  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.920040  108857 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 02:11:05.920096  108857 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 02:11:05.920219  108857 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.920336  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.920357  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.921525  108857 watch_cache.go:405] Replace watchCache (rev: 52146) 
I0919 02:11:05.922986  108857 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 02:11:05.923010  108857 master.go:461] Enabling API group "storage.k8s.io".
I0919 02:11:05.923232  108857 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.923384  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.923406  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.923536  108857 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 02:11:05.924043  108857 watch_cache.go:405] Replace watchCache (rev: 52148) 
I0919 02:11:05.924807  108857 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 02:11:05.924954  108857 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.925056  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.925070  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.925306  108857 watch_cache.go:405] Replace watchCache (rev: 52148) 
I0919 02:11:05.925356  108857 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 02:11:05.926104  108857 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 02:11:05.926171  108857 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 02:11:05.926252  108857 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.926401  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.926424  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.927002  108857 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 02:11:05.927109  108857 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 02:11:05.927142  108857 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.927294  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.927296  108857 watch_cache.go:405] Replace watchCache (rev: 52150) 
I0919 02:11:05.927315  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.927428  108857 watch_cache.go:405] Replace watchCache (rev: 52150) 
I0919 02:11:05.927737  108857 watch_cache.go:405] Replace watchCache (rev: 52150) 
I0919 02:11:05.928434  108857 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 02:11:05.928520  108857 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 02:11:05.928732  108857 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.928871  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.928888  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.929637  108857 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 02:11:05.929669  108857 master.go:461] Enabling API group "apps".
I0919 02:11:05.929701  108857 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.929822  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.929848  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.929917  108857 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 02:11:05.930220  108857 watch_cache.go:405] Replace watchCache (rev: 52150) 
I0919 02:11:05.930401  108857 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 02:11:05.930436  108857 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 02:11:05.930453  108857 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.930584  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.930603  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.931503  108857 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 02:11:05.931555  108857 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 02:11:05.931578  108857 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.931722  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.931760  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.931900  108857 watch_cache.go:405] Replace watchCache (rev: 52152) 
I0919 02:11:05.932432  108857 watch_cache.go:405] Replace watchCache (rev: 52152) 
I0919 02:11:05.933223  108857 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 02:11:05.933271  108857 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 02:11:05.933437  108857 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.933691  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.933715  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.933352  108857 watch_cache.go:405] Replace watchCache (rev: 52152) 
I0919 02:11:05.934763  108857 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 02:11:05.934785  108857 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 02:11:05.934817  108857 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.935015  108857 watch_cache.go:405] Replace watchCache (rev: 52153) 
I0919 02:11:05.935090  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:05.935109  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:05.935191  108857 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 02:11:05.936224  108857 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 02:11:05.936253  108857 master.go:461] Enabling API group "events.k8s.io".
I0919 02:11:05.936319  108857 watch_cache.go:405] Replace watchCache (rev: 52155) 
I0919 02:11:05.936515  108857 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 02:11:05.936505  108857 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.936806  108857 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.937073  108857 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.937197  108857 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.937319  108857 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.937430  108857 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.937607  108857 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.937697  108857 watch_cache.go:405] Replace watchCache (rev: 52155) 
I0919 02:11:05.937826  108857 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.938018  108857 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.938203  108857 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.939039  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.939260  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.940146  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.940468  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.941294  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.941597  108857 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.942252  108857 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.942467  108857 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.942986  108857 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.943210  108857 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 02:11:05.943257  108857 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 02:11:05.943854  108857 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.943978  108857 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.944152  108857 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.944730  108857 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.945185  108857 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.945738  108857 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.945959  108857 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.946742  108857 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.947502  108857 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.947697  108857 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.948272  108857 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 02:11:05.948317  108857 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 02:11:05.949031  108857 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.949280  108857 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.949794  108857 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.950353  108857 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.950667  108857 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.951150  108857 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.951718  108857 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.952131  108857 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.952473  108857 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.952945  108857 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.953472  108857 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 02:11:05.953574  108857 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 02:11:05.954024  108857 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.954474  108857 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 02:11:05.954545  108857 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 02:11:05.955171  108857 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.955666  108857 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.955893  108857 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.956411  108857 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.956852  108857 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.957344  108857 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.957997  108857 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 02:11:05.958141  108857 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 02:11:05.959297  108857 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.959841  108857 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.960018  108857 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.960714  108857 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.960963  108857 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.961195  108857 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.961867  108857 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.962109  108857 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.962344  108857 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.963053  108857 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.963281  108857 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.963592  108857 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 02:11:05.963650  108857 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 02:11:05.963659  108857 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 02:11:05.964152  108857 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.964793  108857 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.965544  108857 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.965954  108857 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.966511  108857 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f71c5adb-cbfc-4836-8690-4fecd1c91abd", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 02:11:05.969430  108857 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 02:11:05.969474  108857 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 02:11:05.969482  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:05.969490  108857 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 02:11:05.969496  108857 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 02:11:05.969501  108857 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 02:11:05.969525  108857 httplog.go:90] GET /healthz: (245.431µs) 0 [Go-http-client/1.1 127.0.0.1:60226]
I0919 02:11:05.971105  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.365831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:05.973998  108857 httplog.go:90] GET /api/v1/services: (1.055023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:05.977662  108857 httplog.go:90] GET /api/v1/services: (1.052958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:05.979516  108857 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 02:11:05.979546  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:05.979558  108857 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 02:11:05.979568  108857 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 02:11:05.979575  108857 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 02:11:05.979598  108857 httplog.go:90] GET /healthz: (192.135µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:05.980418  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (925.559µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60226]
I0919 02:11:05.981024  108857 httplog.go:90] GET /api/v1/services: (918.621µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:05.982308  108857 httplog.go:90] GET /api/v1/services: (1.108755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:05.982316  108857 httplog.go:90] POST /api/v1/namespaces: (1.551573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60226]
I0919 02:11:05.984191  108857 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.053503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:05.986905  108857 httplog.go:90] POST /api/v1/namespaces: (1.959601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:05.988563  108857 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.359859ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:05.996673  108857 httplog.go:90] POST /api/v1/namespaces: (7.197859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.035004  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.035029  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.035006  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.035215  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.037143  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.037174  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.039396  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:06.754457  108857 client.go:361] parsed scheme: "endpoint"
I0919 02:11:06.754525  108857 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 02:11:06.771131  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:06.771161  108857 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 02:11:06.771171  108857 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 02:11:06.771180  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 02:11:06.771216  108857 httplog.go:90] GET /healthz: (1.000809ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:06.970427  108857 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (968.864µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.970935  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (793.425µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60590]
I0919 02:11:06.971050  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.299131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:06.972831  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (986.054µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60590]
I0919 02:11:06.973399  108857 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.058061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.973552  108857 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 02:11:06.973576  108857 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.992257ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:06.973892  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (703.367µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60590]
I0919 02:11:06.974846  108857 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (962.138µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.975062  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (864.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60590]
I0919 02:11:06.975428  108857 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.294529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:06.977142  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.318781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60590]
I0919 02:11:06.977843  108857 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.383771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.977987  108857 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 02:11:06.978000  108857 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 02:11:06.978276  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (773.581µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60228]
I0919 02:11:06.980902  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:06.980927  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:06.980963  108857 httplog.go:90] GET /healthz: (918.558µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:06.984049  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (5.242052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.985399  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (882.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.986742  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (986.709µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.988013  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (932.14µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.990028  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.619733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.990223  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 02:11:06.991133  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (719.363µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.993256  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.640766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.993436  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 02:11:06.994502  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (813.1µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.996697  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.666743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:06.996915  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 02:11:06.998713  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.493131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.000656  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.424411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.000839  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 02:11:07.002753  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.7561ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.006091  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.006292  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 02:11:07.007763  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.208992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.009757  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.612714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.010015  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 02:11:07.010909  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (736.691µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.013193  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.013476  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 02:11:07.016922  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (3.260946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.018941  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.609917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.019187  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 02:11:07.022234  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.768125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.024737  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.850931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.025153  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 02:11:07.026236  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (834.094µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.029801  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.012462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.030115  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 02:11:07.031638  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.35206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.033684  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.14905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.033845  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 02:11:07.034794  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (703.565µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.035177  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.035187  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.035204  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.035531  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.037293  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.122161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.037493  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.037527  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.037554  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 02:11:07.038935  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.219447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.039817  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:07.041322  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.042630  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 02:11:07.043938  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (973.651µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.045617  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.28921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.045964  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 02:11:07.047022  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (861.314µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.049325  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.960873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.049660  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 02:11:07.050858  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (916.42µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.054474  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.253929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.054827  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 02:11:07.056102  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (836.697µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.062022  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.24605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.062353  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 02:11:07.064044  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.455507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.067954  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.422173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.068209  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 02:11:07.069425  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (967.071µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.071202  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.071280  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.071537  108857 httplog.go:90] GET /healthz: (1.356613ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:07.072682  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.646545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.073052  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 02:11:07.074537  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.170602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.076797  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.076960  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 02:11:07.078055  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (871.565µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.079866  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.080086  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 02:11:07.081232  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.081260  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.081321  108857 httplog.go:90] GET /healthz: (716.313µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.081352  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.065821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.083931  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.085397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.084188  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 02:11:07.085406  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (867.076µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.086944  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.201912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.087321  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 02:11:07.088434  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (865.99µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.090297  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.090544  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 02:11:07.093085  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.786941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.095738  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.123736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.095918  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 02:11:07.097243  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.136901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.101216  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.548897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.102608  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 02:11:07.105748  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.662171ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.111532  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.88881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.111755  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 02:11:07.113798  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.283824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.117067  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.451266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.117467  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 02:11:07.118699  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.029028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.121866  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.597184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.122269  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 02:11:07.123906  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.281481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.127511  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.278952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.127834  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 02:11:07.129762  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.619807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.131824  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.548839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.132074  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 02:11:07.133940  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.664497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.135856  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.435053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.136058  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 02:11:07.137347  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.049568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.139339  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.139765  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 02:11:07.141194  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.035084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.143393  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.51839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.143683  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 02:11:07.144942  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.046972ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.146942  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.614433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.147226  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 02:11:07.148220  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (735.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.150212  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.566375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.150469  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 02:11:07.151346  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (707.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.153423  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.153625  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 02:11:07.154729  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (919.867µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.156540  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.156744  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 02:11:07.157686  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (809.369µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.159299  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.159652  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 02:11:07.160933  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (959.585µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.163771  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.03248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.163961  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 02:11:07.164963  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (809.84µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.166792  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.450735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.167036  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 02:11:07.169004  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.729044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.171004  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.508253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.171095  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.171426  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.171618  108857 httplog.go:90] GET /healthz: (1.539639ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:07.171764  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 02:11:07.173050  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (986.972µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.176821  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.729718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.177236  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 02:11:07.178285  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (773.509µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.180861  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.180883  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.180907  108857 httplog.go:90] GET /healthz: (876.39µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.185166  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.149833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.185434  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 02:11:07.186847  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.16384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.188768  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.189026  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 02:11:07.190333  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.113798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.192294  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.46114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.193815  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 02:11:07.194972  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (879.226µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.197072  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.197264  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 02:11:07.198445  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (936.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.200634  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.796198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.200979  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 02:11:07.202064  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (842.194µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.204143  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.57444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.204404  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 02:11:07.205559  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (976.604µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.208252  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.721712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.208504  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 02:11:07.209540  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (848.411µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.211763  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.531043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.211975  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 02:11:07.213608  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.397188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.234786  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.778836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.235112  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 02:11:07.251283  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.378401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.272181  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.291163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.272574  108857 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 02:11:07.273104  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.273464  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.273764  108857 httplog.go:90] GET /healthz: (2.721286ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:07.280976  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.281013  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.281067  108857 httplog.go:90] GET /healthz: (935.969µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.290835  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.015448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.312725  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.81943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.313006  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 02:11:07.333738  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (3.548134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.351753  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.766083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.352832  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 02:11:07.371141  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.27034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.371404  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.371443  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.371479  108857 httplog.go:90] GET /healthz: (1.355426ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:07.381456  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.381483  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.381522  108857 httplog.go:90] GET /healthz: (1.007018ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.392023  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.190039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.392271  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 02:11:07.415660  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (2.768023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.433721  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.709182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.434020  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 02:11:07.451068  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.170375ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.471869  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.471901  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.471960  108857 httplog.go:90] GET /healthz: (1.876605ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:07.472237  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.227529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.472503  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 02:11:07.481212  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.481240  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.481273  108857 httplog.go:90] GET /healthz: (1.080615ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.491023  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.183719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.512347  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.300398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.512667  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 02:11:07.531140  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.226267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.552868  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.007005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.553123  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 02:11:07.571580  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.571612  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.571684  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.847885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.571874  108857 httplog.go:90] GET /healthz: (1.643187ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:07.581147  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.581334  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.581575  108857 httplog.go:90] GET /healthz: (1.40699ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.592150  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.592582  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 02:11:07.611071  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.215299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.632015  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.164402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.633092  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 02:11:07.651055  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.171387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.671720  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.671754  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.671807  108857 httplog.go:90] GET /healthz: (1.788441ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:07.672810  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.781045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.673097  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 02:11:07.681400  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.681438  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.681521  108857 httplog.go:90] GET /healthz: (1.394888ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.691133  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.236569ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.712312  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.434492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.712560  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 02:11:07.731079  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.147211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.752802  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.808869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.753050  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 02:11:07.771558  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.791832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:07.772298  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.772321  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.772352  108857 httplog.go:90] GET /healthz: (2.26522ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:07.781219  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.781350  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.781604  108857 httplog.go:90] GET /healthz: (1.320477ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.792427  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.590384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.793625  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 02:11:07.811285  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.308334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.832217  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.334835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.832662  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 02:11:07.851282  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.409988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.871027  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.871060  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.871096  108857 httplog.go:90] GET /healthz: (970.413µs) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:07.872157  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.302119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.872496  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 02:11:07.881213  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.881413  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.881745  108857 httplog.go:90] GET /healthz: (1.603759ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.891189  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.308293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.912225  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.912545  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 02:11:07.930810  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (958.661µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.952157  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.952952  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 02:11:07.971175  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.971210  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.971260  108857 httplog.go:90] GET /healthz: (1.056921ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:07.971405  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.559172ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.980979  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:07.981008  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:07.981037  108857 httplog.go:90] GET /healthz: (946.037µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.992837  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.960438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:07.993093  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 02:11:08.011576  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.558429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.032543  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.538244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.032811  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 02:11:08.035454  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.035579  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.035642  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.035703  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.037674  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.037693  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.040004  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:08.051978  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.989873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.072862  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.930713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.073279  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 02:11:08.081981  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.082161  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.082459  108857 httplog.go:90] GET /healthz: (2.295543ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.088234  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.088494  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.088835  108857 httplog.go:90] GET /healthz: (18.665966ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:08.092328  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.564657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.115716  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.586534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.116018  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 02:11:08.131723  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.538078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.152842  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.954147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.153093  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 02:11:08.171357  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.437206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.171829  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.171854  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.171888  108857 httplog.go:90] GET /healthz: (1.733085ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:08.181313  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.181345  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.181414  108857 httplog.go:90] GET /healthz: (1.256875ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.192975  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.025788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.193259  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 02:11:08.211185  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.184931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.232056  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.092006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.232331  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 02:11:08.252182  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (2.276958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.272117  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.22587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.272191  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.272214  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.272251  108857 httplog.go:90] GET /healthz: (2.106732ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:08.272321  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 02:11:08.281563  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.281600  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.281640  108857 httplog.go:90] GET /healthz: (1.317056ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.291165  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.249535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.311700  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.84901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.312170  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 02:11:08.331048  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.251584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.351493  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.570265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.351728  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 02:11:08.370962  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.370995  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.371048  108857 httplog.go:90] GET /healthz: (714.613µs) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:08.371850  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.01239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.381030  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.381084  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.381129  108857 httplog.go:90] GET /healthz: (1.001714ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.391684  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.879391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.391921  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 02:11:08.410847  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (979.925µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.431996  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.117425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.432192  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 02:11:08.451196  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.328635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.471150  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.471190  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.471229  108857 httplog.go:90] GET /healthz: (1.134737ms) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:08.473215  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.373208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.473513  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 02:11:08.481047  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.481074  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.481110  108857 httplog.go:90] GET /healthz: (1.019093ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.491557  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.71874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.512350  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.452266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.512618  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 02:11:08.531412  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.557409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.560007  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.09899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.560528  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 02:11:08.570980  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.571012  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.571032  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.141839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:08.571047  108857 httplog.go:90] GET /healthz: (815.349µs) 0 [Go-http-client/1.1 127.0.0.1:60592]
I0919 02:11:08.580955  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.580991  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.581044  108857 httplog.go:90] GET /healthz: (933.665µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.591803  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.950227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.591996  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 02:11:08.611513  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.641297ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.631721  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.798494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.631974  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 02:11:08.650783  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (981.979µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.670879  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.671182  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.671484  108857 httplog.go:90] GET /healthz: (1.374871ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:08.671561  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.669996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.671986  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 02:11:08.680964  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.680998  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.681045  108857 httplog.go:90] GET /healthz: (958.629µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.690818  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (973.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.711664  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.782759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.712191  108857 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 02:11:08.731113  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.240011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.732680  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.084706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.752059  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.222908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.754977  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 02:11:08.771954  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.771984  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.772030  108857 httplog.go:90] GET /healthz: (1.88559ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:08.772099  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.995028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.774539  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.028575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.782265  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.782290  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.782335  108857 httplog.go:90] GET /healthz: (1.294998ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.791718  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.874086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.791928  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 02:11:08.811639  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.706724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.813290  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.186982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.831838  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.935364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.832103  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 02:11:08.851293  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.4659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.853094  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.377938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.871404  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.871435  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.871471  108857 httplog.go:90] GET /healthz: (1.359271ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:08.871770  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.916837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.871946  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 02:11:08.880997  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.881023  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.881067  108857 httplog.go:90] GET /healthz: (1.013574ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.891108  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.255414ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.892842  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.075472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.911997  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.176031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.912241  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 02:11:08.930828  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (983.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.932386  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.133383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.952217  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.348348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.952529  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 02:11:08.971552  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.686456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.971809  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.971838  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.971869  108857 httplog.go:90] GET /healthz: (1.268607ms) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:08.973072  108857 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.058586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.980988  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:08.981030  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:08.981074  108857 httplog.go:90] GET /healthz: (1.004299ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.991734  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.823671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:08.992016  108857 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 02:11:09.011820  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.979743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.013580  108857 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.35768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.032012  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.163991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.032293  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 02:11:09.035647  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.035773  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.035812  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.035888  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.037833  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.037864  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.040177  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:09.051208  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.345197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.052908  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.074721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.070780  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:09.070813  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:09.070850  108857 httplog.go:90] GET /healthz: (769.74µs) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:09.071504  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.624475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.071708  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 02:11:09.081032  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:09.081055  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:09.081084  108857 httplog.go:90] GET /healthz: (796.502µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.091775  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.531757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.093595  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.00803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.111776  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.881959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.112027  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 02:11:09.133912  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (4.04447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.136747  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.160239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.152581  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.590655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.152843  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 02:11:09.170839  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:09.170869  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:09.170901  108857 httplog.go:90] GET /healthz: (866.875µs) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:09.170904  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.006397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.172198  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (915.445µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.182625  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:09.182652  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:09.182705  108857 httplog.go:90] GET /healthz: (2.6184ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.191711  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.836686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.191914  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 02:11:09.211128  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.245161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.212640  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.039752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.231807  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.955691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.232036  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 02:11:09.251041  108857 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.06505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.253114  108857 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.391748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.270902  108857 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 02:11:09.270927  108857 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 02:11:09.270954  108857 httplog.go:90] GET /healthz: (858.847µs) 0 [Go-http-client/1.1 127.0.0.1:60230]
I0919 02:11:09.272050  108857 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.196654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.272302  108857 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 02:11:09.280978  108857 httplog.go:90] GET /healthz: (858.209µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.282300  108857 httplog.go:90] GET /api/v1/namespaces/default: (1.014527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.284902  108857 httplog.go:90] POST /api/v1/namespaces: (1.918658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.286538  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.250421ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.291432  108857 httplog.go:90] POST /api/v1/namespaces/default/services: (4.358053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.293129  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.276629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.308920  108857 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (15.228271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.371390  108857 httplog.go:90] GET /healthz: (1.226646ms) 200 [Go-http-client/1.1 127.0.0.1:60592]
W0919 02:11:09.372357  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372421  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372462  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372473  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372506  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372516  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372526  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372535  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372547  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372562  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372572  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.372652  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 02:11:09.372672  108857 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 02:11:09.372682  108857 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 02:11:09.372913  108857 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 02:11:09.373122  108857 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 02:11:09.373142  108857 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 02:11:09.374110  108857 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (642.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60592]
I0919 02:11:09.374873  108857 get.go:251] Starting watch for /api/v1/pods, rv=52056 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m1s
I0919 02:11:09.473078  108857 shared_informer.go:227] caches populated
I0919 02:11:09.473102  108857 shared_informer.go:204] Caches are synced for scheduler 
I0919 02:11:09.473444  108857 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473475  108857 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473503  108857 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473529  108857 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473550  108857 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473631  108857 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473645  108857 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473830  108857 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473847  108857 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473910  108857 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473921  108857 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474042  108857 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474064  108857 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474211  108857 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474227  108857 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474434  108857 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474450  108857 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.473485  108857 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474527  108857 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (693.461µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:09.474778  108857 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.474792  108857 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.475240  108857 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (435.412µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33048]
I0919 02:11:09.475299  108857 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (398.252µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33064]
I0919 02:11:09.475478  108857 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (397.454µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60230]
I0919 02:11:09.475561  108857 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (363.7µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33068]
I0919 02:11:09.475629  108857 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (300.723µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33050]
I0919 02:11:09.475777  108857 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (254.448µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33058]
I0919 02:11:09.475978  108857 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=52029 labels= fields= timeout=5m30s
I0919 02:11:09.475989  108857 get.go:251] Starting watch for /api/v1/nodes, rv=52049 labels= fields= timeout=8m18s
I0919 02:11:09.476023  108857 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (309.297µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33062]
I0919 02:11:09.476288  108857 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=52150 labels= fields= timeout=5m26s
I0919 02:11:09.476414  108857 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=52034 labels= fields= timeout=8m50s
I0919 02:11:09.476612  108857 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=52150 labels= fields= timeout=7m36s
I0919 02:11:09.476539  108857 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=52112 labels= fields= timeout=7m28s
I0919 02:11:09.476646  108857 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=52145 labels= fields= timeout=8m54s
I0919 02:11:09.476677  108857 get.go:251] Starting watch for /api/v1/services, rv=53444 labels= fields= timeout=7m40s
I0919 02:11:09.477201  108857 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (349.144µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33052]
I0919 02:11:09.478006  108857 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.754091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33054]
I0919 02:11:09.478243  108857 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=52064 labels= fields= timeout=9m37s
I0919 02:11:09.478605  108857 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=52148 labels= fields= timeout=5m7s
I0919 02:11:09.573445  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573476  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573481  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573486  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573501  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573507  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573513  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573526  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573531  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573540  108857 shared_informer.go:227] caches populated
I0919 02:11:09.573550  108857 shared_informer.go:227] caches populated
I0919 02:11:09.576019  108857 httplog.go:90] POST /api/v1/namespaces: (1.841925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33372]
I0919 02:11:09.576314  108857 node_lifecycle_controller.go:331] Sending events to api server.
I0919 02:11:09.576412  108857 node_lifecycle_controller.go:364] Controller is using taint based evictions.
W0919 02:11:09.576440  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 02:11:09.576518  108857 taint_manager.go:162] Sending events to api server.
I0919 02:11:09.576584  108857 node_lifecycle_controller.go:458] Controller will reconcile labels.
W0919 02:11:09.576612  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 02:11:09.576628  108857 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 02:11:09.576713  108857 node_lifecycle_controller.go:495] Starting node controller
I0919 02:11:09.576738  108857 shared_informer.go:197] Waiting for caches to sync for taint
I0919 02:11:09.576918  108857 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.576932  108857 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.577959  108857 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (775.774µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33372]
I0919 02:11:09.578830  108857 get.go:251] Starting watch for /api/v1/namespaces, rv=53583 labels= fields= timeout=9m44s
I0919 02:11:09.676823  108857 shared_informer.go:227] caches populated
I0919 02:11:09.676880  108857 shared_informer.go:227] caches populated
I0919 02:11:09.676888  108857 shared_informer.go:227] caches populated
I0919 02:11:09.676894  108857 shared_informer.go:227] caches populated
I0919 02:11:09.676900  108857 shared_informer.go:227] caches populated
I0919 02:11:09.676905  108857 shared_informer.go:227] caches populated
I0919 02:11:09.677170  108857 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.677191  108857 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.677591  108857 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.677611  108857 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.677763  108857 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.677783  108857 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 02:11:09.678749  108857 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (551.336µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33416]
I0919 02:11:09.678777  108857 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (598.527µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33414]
I0919 02:11:09.678785  108857 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (610.272µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33412]
I0919 02:11:09.679324  108857 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=52109 labels= fields= timeout=9m13s
I0919 02:11:09.679471  108857 get.go:251] Starting watch for /api/v1/pods, rv=52056 labels= fields= timeout=6m53s
I0919 02:11:09.679639  108857 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=52150 labels= fields= timeout=5m46s
I0919 02:11:09.777077  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777103  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777108  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777113  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777121  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777125  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777129  108857 shared_informer.go:227] caches populated
I0919 02:11:09.777133  108857 shared_informer.go:227] caches populated
I0919 02:11:09.782145  108857 httplog.go:90] POST /api/v1/nodes: (4.385555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.782877  108857 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 02:11:09.784445  108857 shared_informer.go:227] caches populated
I0919 02:11:09.784477  108857 shared_informer.go:204] Caches are synced for taint 
I0919 02:11:09.784523  108857 node_lifecycle_controller.go:715] Controller observed a new Node: "node-0"
I0919 02:11:09.784534  108857 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 02:11:09.784593  108857 node_lifecycle_controller.go:1253] Initializing eviction metric for zone: region1:\x00:zone1
W0919 02:11:09.784636  108857 node_lifecycle_controller.go:949] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0919 02:11:09.784679  108857 node_lifecycle_controller.go:1153] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0919 02:11:09.785038  108857 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 02:11:09.785122  108857 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"2c66efd2-7d7c-49d3-84ab-552db0d914be", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 02:11:09.785166  108857 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 02:11:09.785177  108857 taint_manager.go:438] Updating known taints on node node-0: []
I0919 02:11:09.785301  108857 httplog.go:90] POST /api/v1/nodes: (2.508275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.785483  108857 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 02:11:09.785519  108857 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 02:11:09.785532  108857 taint_manager.go:438] Updating known taints on node node-1: []
I0919 02:11:09.790152  108857 httplog.go:90] POST /api/v1/namespaces/default/events: (4.570457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:09.795173  108857 httplog.go:90] POST /api/v1/nodes: (9.208867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.795469  108857 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 02:11:09.795537  108857 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 02:11:09.795550  108857 taint_manager.go:438] Updating known taints on node node-2: []
I0919 02:11:09.797808  108857 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods: (2.072408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.797962  108857 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0
I0919 02:11:09.797989  108857 scheduler.go:530] Attempting to schedule pod: taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0
I0919 02:11:09.798153  108857 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f", Name:"testpod-0"}
I0919 02:11:09.798275  108857 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0", node "node-0"
I0919 02:11:09.798305  108857 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0", node "node-0": all PVCs bound and nothing to do
I0919 02:11:09.798340  108857 factory.go:606] Attempting to bind testpod-0 to node-0
I0919 02:11:09.802932  108857 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods/testpod-0/binding: (2.263618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.803151  108857 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f", Name:"testpod-0"}
I0919 02:11:09.803263  108857 scheduler.go:662] pod taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0 is bound successfully on node "node-0", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 02:11:09.805413  108857 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/events: (1.84645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.900183  108857 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods/testpod-0: (1.351895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.902509  108857 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods/testpod-0: (1.647374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.904163  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.243669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:09.906641  108857 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.014488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.010612  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.513098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.035803  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.035925  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.036027  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.036035  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.038001  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.038009  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.040358  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.109773  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.289408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.211203  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.635122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.310061  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.523046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.410332  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.766156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.475105  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.475930  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.475943  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.476461  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.476472  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.478466  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.511155  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.838978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.610419  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.515018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.679233  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:10.710032  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.271249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.809899  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.39004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:10.909838  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.268703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.009981  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.520327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.035987  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.036080  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.036226  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.036271  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.038166  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.038171  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.040526  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.109883  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.371592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.210436  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.905195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.309779  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.263701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.410778  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.259759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.475282  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.476077  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.476123  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.476593  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.476627  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.478698  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.510255  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.694639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.611880  108857 httplog.go:90] GET /api/v1/nodes/node-0: (3.332949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.679452  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:11.710331  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.740148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.810302  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.779902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:11.910266  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.779506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.009921  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.405718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.036205  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.036248  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.036405  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.036416  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.038328  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.038348  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.040683  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.110031  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.479396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.210160  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.513919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.309875  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.315238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.409931  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.44675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.475472  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.476225  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.476232  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.477061  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.477193  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.478857  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.509935  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.421545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.609691  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.164547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.679648  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:12.709992  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.454507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.809793  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.302716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.909804  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.271695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:12.940449  108857 httplog.go:90] GET /api/v1/namespaces/default: (1.309171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:12.941922  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.083096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:12.943455  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.124584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:13.010345  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.851473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.036407  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.036447  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.036601  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.036679  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.038434  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.038508  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.040849  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.109876  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.299696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.210336  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.738455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.310008  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.459787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.410094  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.538465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.475642  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.476419  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.476620  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.477220  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.477434  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.479121  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.510117  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.589349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.612803  108857 httplog.go:90] GET /api/v1/nodes/node-0: (4.185831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.679840  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:13.709818  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.299379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.810323  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.432084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:13.909856  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.350079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.010898  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.313955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.036594  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.036594  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.036741  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.036742  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.038613  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.038733  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.041000  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.109992  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.505081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.210054  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.500606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.310136  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.575623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.410049  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.436242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.475815  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.476585  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.476791  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.477398  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.477574  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.479313  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.510149  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.590279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.610338  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.811837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.680004  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:14.710357  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.772069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.784851  108857 node_lifecycle_controller.go:715] Controller observed a new Node: "node-1"
I0919 02:11:14.784886  108857 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 02:11:14.784952  108857 node_lifecycle_controller.go:715] Controller observed a new Node: "node-2"
I0919 02:11:14.784961  108857 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0919 02:11:14.785031  108857 node_lifecycle_controller.go:949] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0919 02:11:14.785079  108857 node_lifecycle_controller.go:949] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 02:11:14.785109  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 5.000465118s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 02:11:14.785210  108857 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"3fb7bc6b-c09f-4cb5-8575-5936989841f9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 02:11:14.785226  108857 node_lifecycle_controller.go:1021] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 02:11:14.785240  108857 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"d7ec749a-240e-4cb0-a71f-c7d64852ed6e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0919 02:11:14.785245  108857 node_lifecycle_controller.go:1021] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 02:11:14.785259  108857 node_lifecycle_controller.go:1021] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 02:11:14.787498  108857 httplog.go:90] POST /api/v1/namespaces/default/events: (1.795808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.789492  108857 httplog.go:90] POST /api/v1/namespaces/default/events: (1.564785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:14.790648  108857 httplog.go:90] PUT /api/v1/nodes/node-0/status: (4.949018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:14.791146  108857 node_lifecycle_controller.go:779] Node node-0 is NotReady as of 2019-09-19 02:11:14.791062193 +0000 UTC m=+240.729417340. Adding it to the Taint queue.
I0919 02:11:14.795489  108857 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (393.866µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:14.798461  108857 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.943962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:14.798807  108857 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-19 02:11:14.794787437 +0000 UTC m=+240.733142636,}] Taint to Node node-0
I0919 02:11:14.798854  108857 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 02:11:14.799050  108857 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 02:11:14 +0000 UTC}]
I0919 02:11:14.799110  108857 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0 at 2019-09-19 02:11:14.799093466 +0000 UTC m=+240.737448635 to be fired at 2019-09-19 02:16:14.799093466 +0000 UTC m=+540.737448635
I0919 02:11:14.799037  108857 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 02:11:14.809828  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.262633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:14.910573  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.963918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.010556  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.020764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.038410  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.038673  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.038682  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.038726  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.038778  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.038893  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.041145  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.110571  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.986123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.210609  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.986945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.310502  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.758916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.411264  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.434357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.475977  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.476766  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.476918  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.477548  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.477718  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.479439  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.510011  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.516066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.610418  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.805435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.680173  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:15.710083  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.593671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.810221  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.668991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:15.910106  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.495054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.010811  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.102799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.038567  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.038861  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.038887  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.038959  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.038866  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.039098  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.041357  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.109921  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.423112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.210154  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.67655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.313180  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.868013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.410059  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.472171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.476135  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.476925  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.477173  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.477824  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.477843  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.480431  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.511782  108857 httplog.go:90] GET /api/v1/nodes/node-0: (3.143876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.610346  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.779666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.680673  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:16.710493  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.00603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.810217  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.633396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:16.909952  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.444065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.009962  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.478908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.038746  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.039057  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.039085  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.039100  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.039101  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.039213  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.041540  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.109893  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.329299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.209985  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.458247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.310258  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.702077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.409997  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.340596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.476309  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.477107  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.477315  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.477957  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.478044  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.480582  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.509984  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.361295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.610393  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.621434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.680840  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:17.710253  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.738546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.809865  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.408896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:17.910754  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.160087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.010076  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.470952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.038918  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.039237  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.039239  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.039282  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.039324  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.039381  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.041765  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.110084  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.495795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.210070  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.511859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.310031  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.503048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.410260  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.586221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.476486  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.477677  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.477799  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.478074  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.478164  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.480770  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.510206  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.694577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.610320  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.791471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.681032  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:18.710140  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.64919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.810188  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.558163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:18.909787  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.336557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.009992  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.428852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.039155  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.039353  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.039407  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.039421  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.039430  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.039528  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.041933  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.110041  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.573997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.210411  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.640432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.283932  108857 httplog.go:90] GET /api/v1/namespaces/default: (2.267356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.285923  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.415232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.287746  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.139708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.310015  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.510672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.410310  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.719563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.476660  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.477910  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.477952  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.478250  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.478304  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.480965  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.510259  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.697068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.610696  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.064907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.681241  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:19.710471  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.864707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.791537  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 10.006877868s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:19.791618  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 10.00697314s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:19.791645  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 10.007001232s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:19.791663  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 10.007019961s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:19.791737  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 5.006693895s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 02:11:19.791761  108857 node_lifecycle_controller.go:1021] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 02:11:19.791771  108857 node_lifecycle_controller.go:1021] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 02:11:19.791776  108857 node_lifecycle_controller.go:1021] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 02:11:19.794870  108857 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.665307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.795327  108857 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0919 02:11:19.795352  108857 controller_utils.go:124] Update ready status of pods on node [node-1]
I0919 02:11:19.795470  108857 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"3fb7bc6b-c09f-4cb5-8575-5936989841f9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0919 02:11:19.797548  108857 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.839682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:19.797725  108857 httplog.go:90] POST /api/v1/namespaces/default/events: (2.042964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.797945  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 5.012855146s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 02:11:19.797980  108857 node_lifecycle_controller.go:1021] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 02:11:19.797990  108857 node_lifecycle_controller.go:1021] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 02:11:19.797996  108857 node_lifecycle_controller.go:1021] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 02:11:19.800418  108857 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.13558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.800812  108857 controller_utils.go:180] Recording status change NodeNotReady event message for node node-2
I0919 02:11:19.800845  108857 controller_utils.go:124] Update ready status of pods on node [node-2]
I0919 02:11:19.800951  108857 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"d7ec749a-240e-4cb0-a71f-c7d64852ed6e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I0919 02:11:19.802286  108857 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (1.164642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.802610  108857 httplog.go:90] POST /api/v1/namespaces/default/events: (1.479403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:19.802702  108857 node_lifecycle_controller.go:1103] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 02:11:19.803326  108857 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (417.648µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:19.806103  108857 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.00254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:19.806700  108857 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 02:11:19.806721  108857 taint_manager.go:438] Updating known taints on node node-0: []
I0919 02:11:19.806734  108857 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I0919 02:11:19.806743  108857 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0 at 2019-09-19 02:11:19.806740963 +0000 UTC m=+245.745096096
I0919 02:11:19.806809  108857 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/testpod-0
I0919 02:11:19.808601  108857 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/events: (1.492089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33492]
I0919 02:11:19.809393  108857 httplog.go:90] GET /api/v1/nodes/node-0: (974.788µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:19.910147  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.593171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.010476  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.843608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.039345  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.039566  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.039652  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.039659  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.039669  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.039692  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.042093  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.110247  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.706417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.210035  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.515632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.310066  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.530295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.410425  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.820254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.477051  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.479471  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.479576  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.479653  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.479654  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.481130  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.509824  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.331723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.610190  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.67595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.681426  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:20.710095  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.601568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.810103  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.528364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:20.910292  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.690975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.010282  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.706597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.039544  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.039725  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.039744  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.039810  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.039828  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.039849  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.042262  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.109928  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.39958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.210176  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.581517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.310062  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.514643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.410343  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.711427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.477196  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.479670  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.479701  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.479837  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.480068  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.481323  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.510295  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.713182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.611494  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.337925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.681610  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:21.710054  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.516377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.810163  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.501675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:21.910162  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.678836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.010282  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.645569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.039718  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.039946  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.039959  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.039962  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.039994  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.040092  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.042341  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.110668  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.824237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.210122  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.607646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.310166  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.655948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.410101  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.594151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.477394  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.479816  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.480139  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.480178  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.480257  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.481611  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.510680  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.092991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.610079  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.55404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.681774  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:22.710396  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.792784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.810306  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.710064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.910403  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.711373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:22.940603  108857 httplog.go:90] GET /api/v1/namespaces/default: (1.353681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:22.942255  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.179363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:22.943917  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.03119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:23.010826  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.286969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.039925  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.040147  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.040142  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.040159  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.040165  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.040157  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.042625  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.110631  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.972264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.211670  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.716252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.310082  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.56465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.410451  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.808808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.477610  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.480047  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.480282  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.480300  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.480561  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.481795  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.510431  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.713782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.609949  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.356831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.681956  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:23.710502  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.625854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.810793  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.223558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:23.909820  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.277821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.010706  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.20558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.040325  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.040407  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.040574  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.040697  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.040777  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.040791  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.042992  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.110663  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.172316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.212641  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.869034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.310611  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.470885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.410579  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.97128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.477790  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.480269  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.480432  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.480653  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.480701  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.481964  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.510511  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.823551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.610207  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.55544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.682163  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:24.710163  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.448426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.807901  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 15.023244999s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:24.807965  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 15.023320588s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.807991  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 15.023347415s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808015  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 15.02337176s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808076  108857 node_lifecycle_controller.go:805] Node node-0 is unresponsive as of 2019-09-19 02:11:24.808058363 +0000 UTC m=+250.746413508. Adding it to the Taint queue.
I0919 02:11:24.808109  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 10.023064968s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:24.808128  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 10.023083636s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808144  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 10.023098878s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808160  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 10.02311574s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808192  108857 node_lifecycle_controller.go:805] Node node-1 is unresponsive as of 2019-09-19 02:11:24.808180425 +0000 UTC m=+250.746535577. Adding it to the Taint queue.
I0919 02:11:24.808223  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 10.023131652s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:24.808240  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 10.023153639s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808254  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 10.023167445s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808270  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 10.023183547s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:24.808301  108857 node_lifecycle_controller.go:805] Node node-2 is unresponsive as of 2019-09-19 02:11:24.808289138 +0000 UTC m=+250.746644281. Adding it to the Taint queue.
I0919 02:11:24.810475  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.020016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:24.910282  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.726582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.011138  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.497414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.040527  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.040572  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.040748  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.040908  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.040941  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.041020  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.043332  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.110392  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.754812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.209898  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.389923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.310102  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.534869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.410222  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.735044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.477963  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.480456  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.480523  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.480785  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.480834  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.482115  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.509967  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.50897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.610302  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.692198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.682414  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:25.710486  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.891719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.810889  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.321133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:25.910611  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.871088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.010257  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.678273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.040671  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.040680  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.040868  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.041075  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.041273  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.041091  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.043524  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.110920  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.178223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.210038  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.459576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.309971  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.397102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.409978  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.467043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.478138  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.480667  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.480674  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.480959  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.480990  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.482280  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.511394  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.491067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.610330  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.729501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.682582  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:26.710427  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.87226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.812284  108857 httplog.go:90] GET /api/v1/nodes/node-0: (3.623608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:26.910160  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.571072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.010624  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.988216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.040861  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.040861  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.041059  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.041250  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.041445  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.041471  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.043612  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.110290  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.734746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.210475  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.89924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.310735  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.190244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.410445  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.841298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.478260  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.480872  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.481005  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.481118  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.481119  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.482444  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.509893  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.37124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.610176  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.491802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.682788  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:27.710600  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.898694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.810447  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.862285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:27.910625  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.958788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.010402  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.853318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.041166  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.041440  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.041572  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.041580  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.041601  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.041607  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.043948  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.111121  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.545536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.211269  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.746829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.310256  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.67406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.410703  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.003897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.478417  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.481105  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.481279  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.481458  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.481617  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.482599  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.510348  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.729047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.610477  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.823883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.683016  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:28.710244  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.645546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.810233  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.649003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:28.910245  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.617209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.010556  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.913108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.041344  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.041611  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.041734  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.041743  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.041826  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.041826  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.044165  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.110274  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.737014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.210590  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.952604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.282981  108857 httplog.go:90] GET /api/v1/namespaces/default: (1.284941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.285064  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.513997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.286854  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.312873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.310410  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.831129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.410139  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.569691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.478565  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.481397  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.481426  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.481633  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.481868  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.482750  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.510061  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.524977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.610119  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.549373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.683225  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:29.710064  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.435431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.808615  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 20.023961101s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:29.808668  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 20.024025469s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.808693  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 20.024050318s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.808710  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 20.024068207s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.808881  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 15.023836714s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:29.808906  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 15.023863064s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.808917  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 15.023873957s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.808927  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 15.023884174s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.808990  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 15.023904169s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:29.809039  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 15.023953198s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.809051  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 15.023965914s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.809799  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 15.024706814s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:29.810231  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.660108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:29.909945  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.397071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.010027  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.49618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.041560  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.041824  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.042063  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.042211  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.042211  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.042282  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.044503  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.110152  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.613694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.210146  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.578844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.310139  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.547109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.410623  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.02197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.478686  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.481570  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.481582  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.482044  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.482282  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.482896  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.510336  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.802728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.610209  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.642932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.683693  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:30.710592  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.964387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.810250  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.621828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:30.910267  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.594935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.009859  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.325282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.042073  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.042163  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.042217  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.042439  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.042991  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.043022  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.044665  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.110175  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.626479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.210199  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.676171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.310720  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.193957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.410105  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.457916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.478854  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.481832  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.481842  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.482218  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.482415  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.483054  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.510385  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.75221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.610758  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.152452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.683870  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:31.710929  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.343358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.810317  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.777394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:31.910328  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.862705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.010252  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.674715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.042349  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.042565  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.042598  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.042724  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.043148  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.043164  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.044846  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.110336  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.640901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.210329  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.744022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.310247  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.664656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.410998  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.373024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.479007  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.482045  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.482051  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.482399  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.482582  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.483199  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.510528  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.876079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.610269  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.692049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.684052  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:32.710580  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.00603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.810849  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.248046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.910529  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.889991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:32.940870  108857 httplog.go:90] GET /api/v1/namespaces/default: (1.501772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:32.943054  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.347737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:32.944836  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.345428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44112]
I0919 02:11:33.010505  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.869218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.042584  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.042722  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.042805  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.042818  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.043331  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.043349  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.045135  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.110404  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.847305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.210352  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.6891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.310658  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.05054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.410494  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.88549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.479203  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.482202  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.482205  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.482559  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.482691  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.483342  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.510574  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.009999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.610890  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.188239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.684231  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:33.710277  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.764839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.810754  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.182287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:33.910421  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.850492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.010582  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.003026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.042789  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.042831  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.043034  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.043102  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.043485  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.043489  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.045291  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.110483  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.863184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.210997  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.364094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.310620  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.985973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.410569  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.914388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.479415  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.482383  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.482711  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.482845  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.483133  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.483488  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.510067  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.530495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.610078  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.407071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.684421  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:34.710006  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.504963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.810300  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.700097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:34.810291  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 25.025636561s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:34.810352  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 25.02570712s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810418  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 25.025773543s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810439  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 25.025794874s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810513  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 20.025467991s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:34.810548  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 20.025502843s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810569  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 20.025518537s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810594  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 20.025550025s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810695  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 20.025606938s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:34.810718  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 20.025632491s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810734  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 20.025647902s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.810748  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 20.025662205s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:34.910109  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.566912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.010095  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.546989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.043041  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.043105  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.043248  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.043257  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.043611  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.043621  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.045472  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.109844  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.353118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.210243  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.714805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.310170  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.629713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.410643  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.998045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.479675  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.482745  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.482842  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.483009  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.483287  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.483590  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.510046  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.561242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.610390  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.826476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.684582  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:35.710883  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.185773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.810408  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.831318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:35.910211  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.576364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.010634  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.059986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.043259  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.043259  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.043331  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.043416  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.043801  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.043808  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.045636  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.110083  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.590306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.210304  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.656803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.310185  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.551451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.410813  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.137461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.479846  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.482946  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.483131  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.483191  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.483437  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.483757  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.510174  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.632855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.610521  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.828411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.684780  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:36.711175  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.694992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.810736  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.225566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:36.910522  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.920237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.010607  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.022951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.043418  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.043482  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.043517  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.043532  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.043942  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.043955  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.045805  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.110665  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.124381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.211252  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.685038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.311149  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.542233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.410355  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.733184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.479982  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.483136  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.483322  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.483476  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.483634  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.483909  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.510684  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.198845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.610794  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.206676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.685032  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:37.711001  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.414536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.810287  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.731271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:37.912339  108857 httplog.go:90] GET /api/v1/nodes/node-0: (3.739533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.010425  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.943056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.043624  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.043782  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.043805  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.043807  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.044084  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.044239  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.045966  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.109968  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.464406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.210760  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.077268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.310180  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.583477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.410449  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.921145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.480206  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.483334  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.483488  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.483774  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.483796  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.484058  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.510837  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.138882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.610757  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.052435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.685242  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:38.717215  108857 httplog.go:90] GET /api/v1/nodes/node-0: (8.634196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.810938  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.415584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:38.912675  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.711062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.010581  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.11566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.043986  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.044101  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.044112  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.044136  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.044280  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.044452  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.046170  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.110257  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.733011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.210735  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.9739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.285684  108857 httplog.go:90] GET /api/v1/namespaces/default: (3.721744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.287431  108857 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.28852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.289355  108857 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.490747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.310631  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.145526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.410394  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.809927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.480422  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.483689  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.484045  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.484281  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.484431  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.484457  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.510015  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.545183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.610705  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.294426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.685443  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:39.711099  108857 httplog.go:90] GET /api/v1/nodes/node-0: (2.255791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.810381  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.804103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.810969  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 30.026319286s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:39.811008  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 30.026365297s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811022  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 30.026379664s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811033  108857 node_lifecycle_controller.go:1031] node node-0 hasn't been updated for 30.02639142s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:14 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811086  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 25.026043079s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:39.811102  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 25.026059181s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811113  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 25.02607019s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811135  108857 node_lifecycle_controller.go:1031] node node-1 hasn't been updated for 25.026091765s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811163  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 25.026078295s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 02:11:39.811185  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 25.02610008s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811194  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 25.026109672s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.811204  108857 node_lifecycle_controller.go:1031] node node-2 hasn't been updated for 25.026119323s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 02:11:09 +0000 UTC,LastTransitionTime:2019-09-19 02:11:19 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 02:11:39.910174  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.514237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.912043  108857 httplog.go:90] GET /api/v1/nodes/node-0: (1.183387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
Sep 19 02:11:39.912: INFO: Waiting up to 15s for pod "testpod-0" in namespace "taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f" to be "updated with tolerationSeconds of 200"
I0919 02:11:39.914082  108857 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods/testpod-0: (982.641µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
Sep 19 02:11:39.914: INFO: Pod "testpod-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.541921ms
Sep 19 02:11:39.914: INFO: Pod "testpod-0" satisfied condition "updated with tolerationSeconds of 200"
I0919 02:11:39.919483  108857 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods/testpod-0: (4.843963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.920001  108857 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f", Name:"testpod-0"}
I0919 02:11:39.921978  108857 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions40f2e63f-be06-4d46-8103-d34e88ca977f/pods/testpod-0: (1.04387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.926766  108857 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0919 02:11:39.926834  108857 taint_manager.go:422] Noticed node deletion: "node-0"
I0919 02:11:39.929734  108857 taint_manager.go:422] Noticed node deletion: "node-1"
I0919 02:11:39.929688  108857 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0919 02:11:39.932025  108857 httplog.go:90] DELETE /api/v1/nodes: (9.441324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33494]
I0919 02:11:39.932098  108857 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0919 02:11:39.932156  108857 taint_manager.go:422] Noticed node deletion: "node-2"
I0919 02:11:40.044217  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.044249  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.044255  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.044301  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.044460  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.044598  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.046410  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.480579  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.483892  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.484200  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.484448  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.484563  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.484586  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 02:11:40.685620  108857 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds (35.18s)
        taint_test.go:770: Failed to taint node in test 0 <node-0>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-020103.xml



