PR: draveness — "feat: update taint nodes by condition to GA"
Result: FAILURE
Tests: 8 failed / 2860 succeeded
Started: 2019-09-19 08:48
Elapsed: 27m25s
Revision:
Builder: gke-prow-ssd-pool-1a225945-txmz
Refs: master:b8866250, 82703:7c0cda5d
pod: 2ae927ad-daba-11e9-b7bb-32cecfce85d6
infra-commit: fe9f237a8
repo: k8s.io/kubernetes
repo-commit: 5198c3044f1853ce81577d0a0b3fd476f74c09a1
repos: {u'k8s.io/kubernetes': u'master:b88662505d288297750becf968bf307dacf872fa,82703:7c0cda5d582c928c5b3f14429ca73d3e117ce6a3'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0919 09:10:02.651061  108216 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 09:10:02.651077  108216 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 09:10:02.651089  108216 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 09:10:02.651099  108216 master.go:259] Using reconciler: 
I0919 09:10:02.653302  108216 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.653484  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.653558  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.654149  108216 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 09:10:02.654187  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.654241  108216 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 09:10:02.654471  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.654491  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.654972  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.655195  108216 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:10:02.655225  108216 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.655322  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.655337  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.655398  108216 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:10:02.656414  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.656533  108216 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 09:10:02.656573  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.656667  108216 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 09:10:02.656707  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.656732  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.657180  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.657595  108216 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 09:10:02.657661  108216 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 09:10:02.657798  108216 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.657925  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.657940  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.658905  108216 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 09:10:02.658975  108216 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 09:10:02.658911  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.659171  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.659290  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.659303  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.660351  108216 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 09:10:02.660522  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.660607  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.660700  108216 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 09:10:02.661131  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.661153  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.662251  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.662279  108216 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 09:10:02.662326  108216 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 09:10:02.662475  108216 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.662602  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.662623  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.663331  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.663553  108216 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 09:10:02.663614  108216 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 09:10:02.663764  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.663896  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.663917  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.664691  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.665190  108216 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 09:10:02.665364  108216 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.665469  108216 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 09:10:02.665490  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.665510  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.666339  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.666790  108216 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 09:10:02.666865  108216 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 09:10:02.666970  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.667082  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.667177  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.667894  108216 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 09:10:02.668020  108216 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 09:10:02.668099  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.668207  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.668227  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.668810  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.669840  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.669928  108216 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 09:10:02.670031  108216 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 09:10:02.670095  108216 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.670204  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.670221  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.670628  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.671023  108216 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 09:10:02.671181  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.671315  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.671336  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.671409  108216 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 09:10:02.671999  108216 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 09:10:02.672048  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.672202  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.672221  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.672289  108216 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 09:10:02.672779  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.673562  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.673580  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.674305  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.674413  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.674445  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.674969  108216 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 09:10:02.674984  108216 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 09:10:02.675033  108216 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 09:10:02.675427  108216 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.675702  108216 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.675728  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.676421  108216 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.677117  108216 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.677759  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.678420  108216 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.678827  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.678954  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.679158  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.679755  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.680452  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.680748  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.681436  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.682093  108216 watch_cache.go:405] Replace watchCache (rev: 30250) 
I0919 09:10:02.682106  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.682744  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.683013  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.683625  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.683835  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.683951  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.684064  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.687692  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.688070  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.692447  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.693748  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.694139  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.695136  108216 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.695904  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.696210  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.696623  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.698149  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.698554  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.699338  108216 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.700288  108216 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.700923  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.701631  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.705195  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.705453  108216 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 09:10:02.705543  108216 master.go:461] Enabling API group "authentication.k8s.io".
I0919 09:10:02.705615  108216 master.go:461] Enabling API group "authorization.k8s.io".
I0919 09:10:02.705915  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.706201  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.706348  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.708997  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:10:02.709034  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:10:02.709184  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.709401  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.709433  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.709918  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.710087  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:10:02.710279  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.710517  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.711032  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.710317  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:10:02.711887  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:10:02.711911  108216 master.go:461] Enabling API group "autoscaling".
I0919 09:10:02.712081  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.712205  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.712237  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.712324  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:10:02.712355  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.713156  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.713490  108216 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 09:10:02.713717  108216 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 09:10:02.713637  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.713853  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.713885  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.716908  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.717746  108216 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 09:10:02.717777  108216 master.go:461] Enabling API group "batch".
I0919 09:10:02.717833  108216 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 09:10:02.717967  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.718100  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.718120  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.718701  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.719669  108216 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 09:10:02.719694  108216 master.go:461] Enabling API group "certificates.k8s.io".
I0919 09:10:02.719864  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.719980  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.719996  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.720138  108216 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 09:10:02.721343  108216 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:10:02.721486  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.721610  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.721628  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.721662  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.721701  108216 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:10:02.722391  108216 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:10:02.722411  108216 master.go:461] Enabling API group "coordination.k8s.io".
I0919 09:10:02.722427  108216 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 09:10:02.722660  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.722619  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.722824  108216 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:10:02.722870  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.722891  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.724002  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.724020  108216 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:10:02.724045  108216 master.go:461] Enabling API group "extensions".
I0919 09:10:02.724093  108216 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:10:02.724199  108216 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.724298  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.724315  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.726793  108216 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 09:10:02.726878  108216 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 09:10:02.726992  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.727154  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.727180  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.727579  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.727953  108216 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:10:02.727975  108216 master.go:461] Enabling API group "networking.k8s.io".
I0919 09:10:02.727988  108216 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:10:02.727996  108216 watch_cache.go:405] Replace watchCache (rev: 30251) 
I0919 09:10:02.728006  108216 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.728131  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.728153  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.729493  108216 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 09:10:02.729515  108216 master.go:461] Enabling API group "node.k8s.io".
I0919 09:10:02.729674  108216 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 09:10:02.729713  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.729840  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.729858  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.730316  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.731102  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.731405  108216 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 09:10:02.731436  108216 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 09:10:02.731685  108216 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.731848  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.731883  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.732795  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.733093  108216 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 09:10:02.733114  108216 master.go:461] Enabling API group "policy".
I0919 09:10:02.733146  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.733167  108216 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 09:10:02.733335  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.733374  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.734359  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.736032  108216 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:10:02.736215  108216 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:10:02.736599  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.736791  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.736810  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.737369  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.738274  108216 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:10:02.738316  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.738432  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.738451  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.738559  108216 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:10:02.742715  108216 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:10:02.742884  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.742966  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.743024  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.743039  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.743163  108216 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:10:02.744210  108216 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:10:02.744248  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.744304  108216 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:10:02.744392  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.745064  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.745161  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.745781  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.746209  108216 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:10:02.746573  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.746823  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.746883  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.746315  108216 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:10:02.747946  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.748078  108216 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:10:02.748108  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.748258  108216 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:10:02.748260  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.748382  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.749497  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.750393  108216 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:10:02.750499  108216 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:10:02.750547  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.750947  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.750971  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.751705  108216 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:10:02.751734  108216 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 09:10:02.752551  108216 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:10:02.753959  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.754719  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.754896  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.754922  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.755388  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.757146  108216 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:10:02.757324  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.757452  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.757487  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.757575  108216 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:10:02.758589  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.758629  108216 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:10:02.758665  108216 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 09:10:02.758711  108216 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:10:02.758807  108216 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 09:10:02.758961  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.759115  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.759135  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.760022  108216 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:10:02.760096  108216 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:10:02.760191  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.760325  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.760340  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.761159  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.761490  108216 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:10:02.761517  108216 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.761535  108216 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:10:02.761696  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.761715  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.761850  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.762896  108216 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 09:10:02.762927  108216 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.762961  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.763055  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.763075  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.763162  108216 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 09:10:02.764578  108216 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 09:10:02.764797  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.764815  108216 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 09:10:02.764919  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.764935  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.765587  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.765770  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.766058  108216 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:10:02.766202  108216 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:10:02.766964  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.767274  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.767371  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.768875  108216 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:10:02.768933  108216 master.go:461] Enabling API group "storage.k8s.io".
I0919 09:10:02.769049  108216 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:10:02.769115  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.769371  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.769391  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.769889  108216 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 09:10:02.769980  108216 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 09:10:02.770104  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.770236  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.770257  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.771065  108216 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 09:10:02.771130  108216 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 09:10:02.771244  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.771340  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.771360  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.773745  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.773753  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.773977  108216 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 09:10:02.773997  108216 watch_cache.go:405] Replace watchCache (rev: 30252) 
I0919 09:10:02.774149  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.774270  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.774293  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.774579  108216 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 09:10:02.774943  108216 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 09:10:02.774996  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.775015  108216 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 09:10:02.775165  108216 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.775306  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.775345  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.775916  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.776407  108216 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 09:10:02.776434  108216 master.go:461] Enabling API group "apps".
I0919 09:10:02.776463  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.776566  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.776613  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.776814  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.776762  108216 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 09:10:02.777863  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.778172  108216 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:10:02.778354  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.778570  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.778690  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.778296  108216 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:10:02.779370  108216 watch_cache.go:405] Replace watchCache (rev: 30253) 
I0919 09:10:02.779750  108216 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:10:02.779889  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.780128  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.780228  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.780395  108216 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:10:02.781119  108216 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:10:02.781157  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.781259  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.781274  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.781353  108216 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:10:02.781446  108216 watch_cache.go:405] Replace watchCache (rev: 30254) 
I0919 09:10:02.782076  108216 watch_cache.go:405] Replace watchCache (rev: 30254) 
I0919 09:10:02.783251  108216 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:10:02.783346  108216 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:10:02.783381  108216 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 09:10:02.783418  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.783709  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:02.783742  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:02.785135  108216 watch_cache.go:405] Replace watchCache (rev: 30254) 
I0919 09:10:02.785418  108216 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:10:02.785580  108216 master.go:461] Enabling API group "events.k8s.io".
I0919 09:10:02.785463  108216 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:10:02.785956  108216 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.786236  108216 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.786549  108216 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.786751  108216 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.786946  108216 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.787075  108216 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.787347  108216 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.787462  108216 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.787673  108216 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.787798  108216 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.789488  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.789735  108216 watch_cache.go:405] Replace watchCache (rev: 30254) 
I0919 09:10:02.789896  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.791008  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.791294  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.792398  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.793032  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.794239  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.794595  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.795595  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.796014  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:10:02.796149  108216 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 09:10:02.796997  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.797225  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.797581  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.798768  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.799736  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.800810  108216 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.801237  108216 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.802334  108216 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.803421  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.803722  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.804565  108216 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:10:02.804662  108216 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 09:10:02.805718  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.806059  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.806726  108216 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.807558  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.808262  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.809137  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.809868  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.810687  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.811258  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.812138  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.813052  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:10:02.813152  108216 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 09:10:02.813858  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.814615  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:10:02.814693  108216 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 09:10:02.815393  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.816094  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.816440  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.817161  108216 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.817714  108216 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.818386  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.819033  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:10:02.819193  108216 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 09:10:02.820299  108216 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.821244  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.821529  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.822265  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.822573  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.822888  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.823552  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.823812  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.824066  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.824868  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.825110  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.825409  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:10:02.825467  108216 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 09:10:02.825475  108216 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 09:10:02.826183  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.826810  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.827365  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.827979  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.829388  108216 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7a0e11b8-aef7-4c1f-8d20-211d92606671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:10:02.833013  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:02.833036  108216 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 09:10:02.833043  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:02.833051  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:02.833057  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:02.833065  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:02.833090  108216 httplog.go:90] GET /healthz: (168.723µs) 0 [Go-http-client/1.1 127.0.0.1:46804]
I0919 09:10:02.834269  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (988.196µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46804]
I0919 09:10:02.837249  108216 httplog.go:90] GET /api/v1/services: (959.202µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46804]
I0919 09:10:02.841161  108216 httplog.go:90] GET /api/v1/services: (1.244932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46804]
I0919 09:10:02.844037  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:02.844063  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:02.844074  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:02.844083  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:02.844092  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:02.844121  108216 httplog.go:90] GET /healthz: (183.24µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46804]
I0919 09:10:02.844126  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (874.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:02.845572  108216 httplog.go:90] GET /api/v1/services: (1.53118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:02.846137  108216 httplog.go:90] POST /api/v1/namespaces: (1.631679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46804]
I0919 09:10:02.852508  108216 httplog.go:90] GET /api/v1/services: (6.519511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:02.854082  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (7.637406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46804]
I0919 09:10:02.855902  108216 httplog.go:90] POST /api/v1/namespaces: (1.258929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:02.857044  108216 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (796.454µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:02.858594  108216 httplog.go:90] POST /api/v1/namespaces: (1.161717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:02.938222  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:02.938427  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:02.938524  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:02.938591  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:02.938726  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:02.938913  108216 httplog.go:90] GET /healthz: (913.861µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:02.944945  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:02.944976  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:02.944988  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:02.944997  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:02.945005  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:02.945033  108216 httplog.go:90] GET /healthz: (207.038µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.033789  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.033831  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.033843  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.033852  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.033860  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.033894  108216 httplog.go:90] GET /healthz: (251.783µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.045103  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.045137  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.045149  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.045159  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.045167  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.045218  108216 httplog.go:90] GET /healthz: (250.292µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.133807  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.133836  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.133848  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.133857  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.133866  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.133894  108216 httplog.go:90] GET /healthz: (259.745µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.144999  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.145028  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.145040  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.145050  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.145058  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.145092  108216 httplog.go:90] GET /healthz: (226.839µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.233801  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.233828  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.233841  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.233851  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.233859  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.233890  108216 httplog.go:90] GET /healthz: (256.225µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.244939  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.244967  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.244979  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.244989  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.244997  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.245044  108216 httplog.go:90] GET /healthz: (220.847µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.333794  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.333825  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.333844  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.333853  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.333863  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.333921  108216 httplog.go:90] GET /healthz: (271.854µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.345311  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.345346  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.345358  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.345380  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.345389  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.345427  108216 httplog.go:90] GET /healthz: (256.284µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.433862  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.433899  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.433927  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.433937  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.433944  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.433979  108216 httplog.go:90] GET /healthz: (270.528µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.448026  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.448060  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.448071  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.448079  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.448092  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.448128  108216 httplog.go:90] GET /healthz: (233.276µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.533783  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.533817  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.533829  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.533838  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.533854  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.533884  108216 httplog.go:90] GET /healthz: (271.346µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.545069  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.545102  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.545114  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.545132  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.545140  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.545169  108216 httplog.go:90] GET /healthz: (251.026µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.633810  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.633853  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.633890  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.633900  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.633908  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.633942  108216 httplog.go:90] GET /healthz: (331.408µs) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.645238  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:10:03.645271  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.645283  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.645292  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.645315  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.645345  108216 httplog.go:90] GET /healthz: (317.195µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.650981  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:10:03.651071  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:10:03.734826  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.734874  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.734886  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.734894  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.734939  108216 httplog.go:90] GET /healthz: (1.293931ms) 0 [Go-http-client/1.1 127.0.0.1:46806]
I0919 09:10:03.745915  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.745942  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.745953  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.745961  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.745995  108216 httplog.go:90] GET /healthz: (1.134213ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.834692  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.575027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:03.834801  108216 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.816581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.835803  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.835826  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:10:03.835848  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:10:03.835863  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:10:03.835896  108216 httplog.go:90] GET /healthz: (1.112843ms) 0 [Go-http-client/1.1 127.0.0.1:46964]
I0919 09:10:03.836118  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.000114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46806]
I0919 09:10:03.836191  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.501892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.837770  108216 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.807351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:03.837814  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.255594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.837975  108216 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 09:10:03.838896  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.254958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46964]
I0919 09:10:03.839000  108216 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (842.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.839904  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.617617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:03.842182  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.770969ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46964]
I0919 09:10:03.843479  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (885.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46964]
I0919 09:10:03.843926  108216 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.245106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.844125  108216 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 09:10:03.844139  108216 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 09:10:03.845297  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (734.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.846141  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.846164  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:03.846192  108216 httplog.go:90] GET /healthz: (901.822µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:03.846311  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (633.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.847352  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (758.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.848479  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (746.016µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.849532  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (707.668µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.851431  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.482449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.851710  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 09:10:03.852669  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (762.992µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.854362  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.854669  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 09:10:03.855518  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (680.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.857179  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.857359  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 09:10:03.858231  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (694.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.859796  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.118611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.860077  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:10:03.861111  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (860.488µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.863103  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.470307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.863419  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 09:10:03.864493  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (768.633µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.866525  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.579854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.866936  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 09:10:03.868128  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.035226ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.870484  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.977429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.870678  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 09:10:03.871819  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (972.556µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.873692  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.336807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.873935  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 09:10:03.876609  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.449001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.878970  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.879221  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 09:10:03.880099  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (740.666µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.882218  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.805921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.882460  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 09:10:03.883400  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (803.054µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.885292  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.47913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.885633  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 09:10:03.886591  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (736.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.888913  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.889250  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 09:10:03.890401  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (830.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.892030  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.036287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.892529  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 09:10:03.893522  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (692.713µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.895488  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.672389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.895732  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 09:10:03.896692  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (751.557µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.898414  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.898679  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 09:10:03.899583  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (712.793µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.901218  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.901460  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 09:10:03.902383  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (756.254µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.904236  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.904670  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 09:10:03.905605  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (796.133µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.907371  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.305788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.907752  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:10:03.908813  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (931.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.910464  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.349818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.910632  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 09:10:03.911835  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (758.782µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.913705  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.404975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.913975  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 09:10:03.914884  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (793.578µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.916458  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.297579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.916682  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 09:10:03.917757  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (711.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.919625  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.423698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.919912  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 09:10:03.920879  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (698.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.922735  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.922993  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 09:10:03.924085  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (817.156µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.926374  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.9976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.926585  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:10:03.927496  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (694.487µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.934337  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.934365  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:03.934401  108216 httplog.go:90] GET /healthz: (895.609µs) 0 [Go-http-client/1.1 127.0.0.1:46808]
I0919 09:10:03.934758  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.346908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.935163  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 09:10:03.936117  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (789.354µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.938587  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.880961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.938901  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:10:03.940066  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (983.308µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.942121  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.577486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.942353  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 09:10:03.943682  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.132517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.946848  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.66797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.947379  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:10:03.949136  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (968.219µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.949348  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:03.949380  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:03.949461  108216 httplog.go:90] GET /healthz: (4.682028ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:03.950894  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.229043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.951358  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:10:03.952349  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (770.324µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.954172  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.954489  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:10:03.956301  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (901.04µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.958718  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.755345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.959034  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:10:03.960062  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (771.486µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.962368  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.938002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.962579  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:10:03.963780  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (907.934µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.966433  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.879917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.966856  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:10:03.968005  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (908.075µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.969852  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.39933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.970045  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:10:03.970930  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (698.449µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.973060  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.973335  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:10:03.974401  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (732.204µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.976576  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.662738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.976936  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:10:03.977883  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (758.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.980187  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.856099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.980458  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:10:03.981512  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (723.71µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.984349  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.984755  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:10:03.985768  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (752.282µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.987495  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.319886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.987841  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:10:03.989953  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.870314ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.992236  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.603903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.992425  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:10:03.993246  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (651.11µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.994961  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.995321  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:10:03.996180  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (726.761µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.998555  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.907267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:03.998874  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:10:03.999787  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (702.987µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.001425  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.32116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.001724  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:10:04.002606  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (697.652µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.004087  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.094788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.004278  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:10:04.013985  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (910.98µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.015899  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.550049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.016221  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:10:04.017438  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (872.72µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.019254  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.412228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.019456  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:10:04.020384  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (745.043µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.022085  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.32379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.022269  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:10:04.023309  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (809.681µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.025166  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.46117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.025383  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:10:04.026744  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.15737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.028396  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.028592  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:10:04.029723  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (917.793µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.031904  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.786742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.032090  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:10:04.033946  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (779.531µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:04.034154  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.034184  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.034210  108216 httplog.go:90] GET /healthz: (692.712µs) 0 [Go-http-client/1.1 127.0.0.1:46808]
I0919 09:10:04.045772  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.045804  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.045933  108216 httplog.go:90] GET /healthz: (1.063144ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.055408  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.186814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.055703  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:10:04.074491  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.270315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.099769  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.517122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.099993  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:10:04.114439  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.176577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.134458  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.134488  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.134525  108216 httplog.go:90] GET /healthz: (1.050752ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.136201  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.683671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.136453  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:10:04.148091  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.148119  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.148159  108216 httplog.go:90] GET /healthz: (3.384825ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.154301  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.134223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.175152  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.882595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.175363  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 09:10:04.194655  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.357971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.215122  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.883268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.215743  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 09:10:04.234503  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.040447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.235556  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.235578  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.235607  108216 httplog.go:90] GET /healthz: (1.55281ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.245494  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.245515  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.245546  108216 httplog.go:90] GET /healthz: (733.115µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.255603  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.370632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.255859  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 09:10:04.274557  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.091086ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.295633  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.402123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.295870  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:10:04.314396  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.1506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.335225  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.335258  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.335295  108216 httplog.go:90] GET /healthz: (1.016043ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.335545  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.335783  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 09:10:04.346166  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.346197  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.346242  108216 httplog.go:90] GET /healthz: (1.284696ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.354441  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.236064ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.378833  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.535058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.379071  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:10:04.394468  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.218573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.415157  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.960625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.415363  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 09:10:04.440023  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.440039  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.677054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.440053  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.440096  108216 httplog.go:90] GET /healthz: (2.12339ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.446112  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.446136  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.446172  108216 httplog.go:90] GET /healthz: (1.267961ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.454969  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.723515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.455202  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:10:04.478250  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (5.084487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.496626  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.228283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.496919  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:10:04.514876  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.643363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.538249  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.538275  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.538282  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.049236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.538308  108216 httplog.go:90] GET /healthz: (4.058891ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.538546  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 09:10:04.546381  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.546412  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.546471  108216 httplog.go:90] GET /healthz: (1.276399ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.554223  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.062216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.575295  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.036656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.575828  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:10:04.597738  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (3.996003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.615759  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.543613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.616056  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:10:04.634964  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.634985  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.635024  108216 httplog.go:90] GET /healthz: (1.007161ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.635318  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.051706ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.645728  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.645771  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.645818  108216 httplog.go:90] GET /healthz: (963.286µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.656140  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.633066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.656323  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:10:04.675664  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.380755ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.696058  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.622655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.696381  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:10:04.716481  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (3.184786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.734502  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.734530  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.734564  108216 httplog.go:90] GET /healthz: (984.358µs) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.736448  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.115651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.736698  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:10:04.745708  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.745750  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.745782  108216 httplog.go:90] GET /healthz: (913.687µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.754510  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.222081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.777210  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.871404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.777481  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:10:04.794382  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.155213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.815731  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.460328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.816151  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:10:04.835384  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.835411  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.835447  108216 httplog.go:90] GET /healthz: (1.448231ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.835748  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.504285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.845769  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.845795  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.845829  108216 httplog.go:90] GET /healthz: (967.534µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.855407  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.159086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.855725  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:10:04.874609  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.455733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.895458  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.895737  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:10:04.914826  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.527098ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.936976  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.937017  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.937053  108216 httplog.go:90] GET /healthz: (2.633314ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:04.937395  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.411772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.937576  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:10:04.946019  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:04.946058  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:04.946097  108216 httplog.go:90] GET /healthz: (1.224883ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.954275  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.068772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.975142  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.975453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:04.975525  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:10:04.997804  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.403203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.034148  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.174842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.034409  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:10:05.035094  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.035126  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.035167  108216 httplog.go:90] GET /healthz: (1.221617ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.035517  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (921.483µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.045986  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.046018  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.046064  108216 httplog.go:90] GET /healthz: (1.120142ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.056002  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.061335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.056244  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:10:05.074487  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.196089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.095372  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.038337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.095671  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:10:05.114589  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.330964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.135565  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.135603  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.135637  108216 httplog.go:90] GET /healthz: (1.235358ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.136105  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.861592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.136674  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:10:05.148176  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.148206  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.148244  108216 httplog.go:90] GET /healthz: (1.147551ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.154293  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.092012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.175096  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.798369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.175313  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:10:05.194536  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.280224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.215393  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.07119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.215658  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:10:05.234561  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.31849ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.234988  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.235008  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.235041  108216 httplog.go:90] GET /healthz: (1.181308ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.246134  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.246170  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.246206  108216 httplog.go:90] GET /healthz: (1.263304ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.255274  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.033001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.255555  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:10:05.274421  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.179991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.295524  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.268984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.295889  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:10:05.314756  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.449872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.335325  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.335355  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.335406  108216 httplog.go:90] GET /healthz: (1.378786ms) 0 [Go-http-client/1.1 127.0.0.1:46808]
I0919 09:10:05.336120  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.83399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.336356  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:10:05.345857  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.345886  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.345930  108216 httplog.go:90] GET /healthz: (1.01714ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.354352  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.090058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.375828  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.116988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.376375  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:10:05.394563  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.264922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.419957  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.743773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.420234  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:10:05.434493  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.204575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.434712  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.434737  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.434779  108216 httplog.go:90] GET /healthz: (869.131µs) 0 [Go-http-client/1.1 127.0.0.1:46808]
I0919 09:10:05.445982  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.446019  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.446070  108216 httplog.go:90] GET /healthz: (1.164674ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.455462  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.258561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.455715  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:10:05.474551  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.258956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.495394  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.110625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.495635  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:10:05.514186  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (959.552µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.535376  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.535677  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:10:05.535678  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.535713  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.535749  108216 httplog.go:90] GET /healthz: (1.34246ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.545707  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.545730  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.545766  108216 httplog.go:90] GET /healthz: (964.789µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.554356  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.106952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.575481  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.15159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.575801  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:10:05.594441  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.207316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.597160  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.305659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.615053  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.805291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.615288  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 09:10:05.634621  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.634680  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.634731  108216 httplog.go:90] GET /healthz: (1.219334ms) 0 [Go-http-client/1.1 127.0.0.1:46808]
I0919 09:10:05.634764  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.592376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.636373  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.05281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.646353  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.646383  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.646431  108216 httplog.go:90] GET /healthz: (1.559246ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.655210  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.012084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.655432  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:10:05.674472  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.184667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.676458  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.529627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.695369  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.138905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.695607  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:10:05.714854  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.13328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.716725  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.44743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.734457  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.734497  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.734539  108216 httplog.go:90] GET /healthz: (1.021274ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.735151  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.88826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.735390  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:10:05.745780  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.745827  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.745861  108216 httplog.go:90] GET /healthz: (1.017135ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.754207  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (960.248µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.755770  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.041191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.775258  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.993433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.775725  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:10:05.794407  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.164753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.796485  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.566277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.815268  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.984375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.815599  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:10:05.834487  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.236322ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.835862  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.835912  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.835942  108216 httplog.go:90] GET /healthz: (1.957948ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.836122  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.210471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.845934  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.845987  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.846024  108216 httplog.go:90] GET /healthz: (1.2046ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.855214  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.962429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.855488  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:10:05.874525  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.267154ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.876571  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.412889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.895581  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.330842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.895855  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:10:05.914490  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.188095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.916265  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.228798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.934960  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.684779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:05.935154  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 09:10:05.934967  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.935238  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.935272  108216 httplog.go:90] GET /healthz: (1.688355ms) 0 [Go-http-client/1.1 127.0.0.1:46966]
I0919 09:10:05.945807  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:05.945834  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:05.945870  108216 httplog.go:90] GET /healthz: (1.072698ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.954212  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.027793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.955784  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.041691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.975211  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.856641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.975498  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:10:05.994773  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.485498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:05.996975  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.38728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.015029  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.724337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.015417  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:10:06.035276  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:06.035322  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:06.035366  108216 httplog.go:90] GET /healthz: (1.756543ms) 0 [Go-http-client/1.1 127.0.0.1:46808]
I0919 09:10:06.036356  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (3.099647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.038708  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.663722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.048018  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:10:06.048048  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:10:06.048086  108216 httplog.go:90] GET /healthz: (975.394µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.055211  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.016751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.055975  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:10:06.075234  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.24301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.077301  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.471465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.095270  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.017594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.095653  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:10:06.114751  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.331735ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.116695  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.293084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.135833  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.18015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.136295  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:10:06.136541  108216 httplog.go:90] GET /healthz: (812.732µs) 200 [Go-http-client/1.1 127.0.0.1:46808]
W0919 09:10:06.137408  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.137629  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.137769  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138022  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138112  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138201  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138284  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138367  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138436  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138510  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:10:06.138612  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:10:06.138728  108216 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 09:10:06.138800  108216 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:10:06.139057  108216 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 09:10:06.139360  108216 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:10:06.139475  108216 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:10:06.140549  108216 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (571.048µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:06.141393  108216 get.go:251] Starting watch for /api/v1/pods, rv=30250 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m48s
I0919 09:10:06.145809  108216 httplog.go:90] GET /healthz: (1.004594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.147102  108216 httplog.go:90] GET /api/v1/namespaces/default: (996.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.150000  108216 httplog.go:90] POST /api/v1/namespaces: (2.53635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.153885  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.062235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.157388  108216 httplog.go:90] POST /api/v1/namespaces/default/services: (3.035283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.158567  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (787.996µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.160740  108216 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.772423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.239319  108216 shared_informer.go:227] caches populated
I0919 09:10:06.239347  108216 shared_informer.go:204] Caches are synced for scheduler 
I0919 09:10:06.239660  108216 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.239688  108216 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.239977  108216 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.239994  108216 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.240002  108216 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.240055  108216 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.240292  108216 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.240379  108216 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.240490  108216 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.241282  108216 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.241420  108216 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.240332  108216 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.241343  108216 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.242080  108216 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (1.271068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:06.242122  108216 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (1.31612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47198]
I0919 09:10:06.241167  108216 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.242160  108216 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.242232  108216 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.391558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0919 09:10:06.241226  108216 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.242385  108216 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.241566  108216 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.242586  108216 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.511467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0919 09:10:06.243961  108216 get.go:251] Starting watch for /api/v1/services, rv=30556 labels= fields= timeout=8m54s
I0919 09:10:06.244081  108216 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30252 labels= fields= timeout=8m16s
I0919 09:10:06.244115  108216 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (419.294µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47198]
I0919 09:10:06.244694  108216 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (371.836µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0919 09:10:06.244748  108216 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (511.479µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47204]
I0919 09:10:06.245198  108216 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30250 labels= fields= timeout=6m26s
I0919 09:10:06.245334  108216 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.245349  108216 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 09:10:06.245568  108216 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30253 labels= fields= timeout=7m27s
I0919 09:10:06.245758  108216 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30252 labels= fields= timeout=5m5s
I0919 09:10:06.246079  108216 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (1.241638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0919 09:10:06.246126  108216 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (245.874µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0919 09:10:06.246274  108216 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30250 labels= fields= timeout=8m33s
I0919 09:10:06.246310  108216 get.go:251] Starting watch for /api/v1/nodes, rv=30250 labels= fields= timeout=8m47s
I0919 09:10:06.246664  108216 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30253 labels= fields= timeout=6m17s
I0919 09:10:06.247208  108216 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (3.001528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0919 09:10:06.247831  108216 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30252 labels= fields= timeout=7m11s
I0919 09:10:06.247853  108216 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30250 labels= fields= timeout=9m57s
I0919 09:10:06.339636  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339697  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339704  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339711  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339718  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339724  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339730  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339737  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339743  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339753  108216 shared_informer.go:227] caches populated
I0919 09:10:06.339764  108216 shared_informer.go:227] caches populated
I0919 09:10:06.342380  108216 httplog.go:90] POST /api/v1/nodes: (2.135819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.343254  108216 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0919 09:10:06.345904  108216 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.114581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.348081  108216 httplog.go:90] POST /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods: (1.656912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.348527  108216 scheduling_queue.go:830] About to try and schedule pod node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pidpressure-fake-name
I0919 09:10:06.348549  108216 scheduler.go:530] Attempting to schedule pod: node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pidpressure-fake-name
I0919 09:10:06.348734  108216 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pidpressure-fake-name", node "testnode"
I0919 09:10:06.348760  108216 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0919 09:10:06.348819  108216 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0919 09:10:06.350980  108216 httplog.go:90] POST /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name/binding: (1.850921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.351612  108216 scheduler.go:662] pod node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0919 09:10:06.353614  108216 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/events: (1.719349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.450954  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.969568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.551595  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.815777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.650543  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.78474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.751005  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.327148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.850559  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.808929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:06.950793  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.947826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.050602  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.802767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.150392  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.602331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.243419  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:07.243708  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:07.243724  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:07.245052  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:07.245476  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:07.247284  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:07.250363  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.61084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.350585  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.835759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.450837  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.03744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.550562  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.825537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.650549  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.715126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.750731  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.937316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.851175  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.411518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:07.950538  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.809123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.050437  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.659914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.151509  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.777535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.243588  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:08.243916  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:08.243952  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:08.245198  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:08.245614  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:08.247438  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:08.250504  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.72246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.350432  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.655204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.452964  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (4.200775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.550275  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.555141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.650189  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.472965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.750390  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.655468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.850671  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.906408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:08.950402  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.621161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.050899  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.07096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.150371  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.662703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.244082  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:09.244135  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:09.244158  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:09.245712  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:09.246907  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:09.247579  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:09.250332  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.606043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.350302  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.528294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.450527  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.736629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.550690  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.946942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.650871  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.670398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.750468  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.810879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.850513  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.720153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:09.950595  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.795401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.050617  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.840894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.150018  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.335906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.244274  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:10.244319  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:10.244333  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:10.245909  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:10.247081  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:10.247710  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:10.250318  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.634176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.350387  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.652044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.450721  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.961431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.550415  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.643925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.650288  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.576646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.750282  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.582815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.850325  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.57638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:10.951152  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.416624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.050406  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.667812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.150288  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.593136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.244449  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:11.244497  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:11.244511  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:11.246075  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:11.247231  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:11.247863  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:11.250330  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.612601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.350362  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.651513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.450435  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.67463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.550456  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.719518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.650309  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.63023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.750318  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.655299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.850293  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.577522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:11.950315  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.58553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.050562  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.781512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.150044  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.416001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.244628  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:12.244701  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:12.244716  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:12.246238  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:12.247364  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:12.248001  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:12.250335  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.684917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.350631  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.916095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.454866  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.049798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.550538  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.809977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.650397  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.67637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.750761  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.918153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.850918  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.170999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:12.963476  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.651353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.051793  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.657393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.154477  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.906939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.244812  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:13.244912  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:13.244926  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:13.246406  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:13.247488  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:13.248322  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:13.250377  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.667117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.350944  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.256769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.450505  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.78562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.550557  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.819086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.650571  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.851744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.750531  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.83753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.850267  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.624767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:13.950430  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.673253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.050892  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.067226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.150531  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.758735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.245001  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:14.245034  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:14.245011  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:14.246572  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:14.253330  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:14.253373  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:14.255967  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.906886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.352968  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.763267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.450484  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.72781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.522840  108216 factory.go:606] Attempting to bind signalling-pod to test-node-1
I0919 09:10:14.523662  108216 scheduler.go:500] Failed to bind pod: permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod
E0919 09:10:14.523683  108216 scheduler.go:502] scheduler cache ForgetPod failed: pod 0d30cf2f-9534-42dd-9096-75e22a77b427 wasn't assumed so cannot be forgotten
E0919 09:10:14.523701  108216 scheduler.go:653] error binding pod: Post http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod/binding: dial tcp 127.0.0.1:42219: connect: connection refused
E0919 09:10:14.523732  108216 factory.go:557] Error scheduling permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod: Post http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod/binding: dial tcp 127.0.0.1:42219: connect: connection refused; retrying
I0919 09:10:14.523776  108216 factory.go:615] Updating pod condition for permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0919 09:10:14.524490  108216 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:42219/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/events: dial tcp 127.0.0.1:42219: connect: connection refused' (may retry after sleeping)
E0919 09:10:14.524545  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
E0919 09:10:14.524599  108216 scheduler.go:333] Error updating the condition of the pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod: Put http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod/status: dial tcp 127.0.0.1:42219: connect: connection refused
I0919 09:10:14.550680  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.897876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.650354  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.63838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
E0919 09:10:14.725319  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
I0919 09:10:14.750604  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.844914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.850752  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.025797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:14.950585  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.457219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.050446  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.687993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
E0919 09:10:15.126127  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
I0919 09:10:15.154103  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (4.95078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.245234  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:15.245330  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:15.245346  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:15.246694  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:15.250889  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.001733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.253546  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:15.253579  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:15.350587  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.852435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.450328  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.592977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.550407  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.654157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.652529  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.761005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.750407  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.646967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:15.850533  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.753287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
E0919 09:10:15.926779  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
I0919 09:10:15.951552  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.782382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:16.051390  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.596475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:16.148016  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.620344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:16.152623  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (4.229608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:16.154320  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.277616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:16.155423  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (6.53359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.245407  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:16.245427  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:16.245407  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:16.246823  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:16.250288  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.671097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.253670  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:16.253704  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:16.350275  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.594015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.450244  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.535616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.550944  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.098029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.650434  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.731139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.750507  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.728877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.850484  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.63495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:16.950338  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.540256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.050482  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.753911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.150335  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.590053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.245569  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:17.245611  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:17.245633  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:17.246979  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:17.250552  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.746895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.253816  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:17.253845  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:17.350152  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.445602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.450321  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.546445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
E0919 09:10:17.527322  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
I0919 09:10:17.550255  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.556364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.650512  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.729074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.749942  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.302331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.850652  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.969659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:17.950138  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.361806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.050223  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.577435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.150511  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.739786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.246230  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:18.246279  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:18.246331  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:18.247124  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:18.250354  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.640701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.253979  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:18.254022  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:18.351464  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.664552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.450491  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.768943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.550264  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.484328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.650237  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.582283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.750221  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.494897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.850567  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.786956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:18.950510  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.697859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.050557  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.765193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.150417  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.565176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.246354  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:19.246879  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:19.246926  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:19.247257  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:19.250108  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.373878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.254104  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:19.254139  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:19.357346  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (8.291967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.450338  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.556151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.550446  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.627269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.650306  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.526747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.750183  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.457855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.850781  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.993433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:19.950410  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.595137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.050454  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.667972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.150350  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.520531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.246688  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:20.247056  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:20.247099  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:20.247417  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:20.250634  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.784856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.254274  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:20.254302  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:20.350563  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.686317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.450546  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.668181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.550334  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.537453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.650353  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.534681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
E0919 09:10:20.727929  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
I0919 09:10:20.750434  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.349241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.852108  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.21787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:20.950579  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.754136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.050298  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.419184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.150411  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.669459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.246898  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:21.247241  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:21.247267  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:21.247604  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:21.250496  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.664827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.254509  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:21.254516  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:21.350538  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.733323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.450357  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.543964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.550619  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.770436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.650528  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.736092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.750431  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.622736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.850678  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.805527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:21.950731  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.940293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.050593  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.830543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.151167  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.368329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.247060  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:22.247362  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:22.247391  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:22.247709  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:22.251715  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.624322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.254623  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:22.254711  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:22.352301  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.361768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.450540  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.720039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.556836  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (8.02187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.650540  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.656955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.751525  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.739105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.850480  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.650145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:22.950634  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.758912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.050875  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.932014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.151019  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.155226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.247263  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:23.247525  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:23.247575  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:23.247904  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:23.250498  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.716699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.254854  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:23.254908  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:23.350417  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.590057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.450805  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.930836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.550463  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.641475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.650350  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.490036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.752724  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.903186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.850487  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.661192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:23.950515  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.669465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.050580  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.739509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.150288  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.450155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.247444  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:24.247706  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:24.247751  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:24.248066  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:24.250989  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.265293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.254966  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:24.254985  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:24.360003  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (9.97438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.450440  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.557218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.550474  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.666206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.650293  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.492268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.750465  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.723792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.850726  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.939134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:24.950776  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.89437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.050456  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.599284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.150391  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.577576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.247678  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:25.247936  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:25.247965  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:25.248276  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:25.250353  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.607197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.255123  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:25.255135  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:25.351445  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.659652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.451010  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.894144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.550716  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.925595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.650532  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.764425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.750924  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.086066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:25.850948  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.146108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
E0919 09:10:25.870682  108216 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:42219/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/events: dial tcp 127.0.0.1:42219: connect: connection refused' (may retry after sleeping)
I0919 09:10:25.950744  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.848932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.050213  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.425582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.148531  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.973212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.151902  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.804358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.151915  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.239435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:26.153936  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.43318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.247858  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:26.248132  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:26.248181  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:26.250092  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:26.250824  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.003903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.255493  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:26.255524  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:26.353426  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (4.64998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.450574  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.697575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.550363  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.534118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.650568  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.787532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.750520  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.769494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.850472  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.674799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:26.950436  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.616929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.050823  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.724235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
E0919 09:10:27.128625  108216 factory.go:590] Error getting pod permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/signalling-pod for retry: Get http://127.0.0.1:42219/api/v1/namespaces/permit-plugin7ad1501e-ef20-4720-a5c5-4bbd86bc1c23/pods/signalling-pod: dial tcp 127.0.0.1:42219: connect: connection refused; retrying...
I0919 09:10:27.150688  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.833107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.248097  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:27.248589  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:27.248743  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:27.250216  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:27.250709  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.850106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.255680  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:27.255709  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:27.350276  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.553386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.450870  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.996259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.550765  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.945059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.650379  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.630595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.750779  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.889698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.851289  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.378843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:27.950446  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.587151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.069915  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.97807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.150928  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.678047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.248263  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:28.249683  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:28.250938  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:28.251102  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:28.252083  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.267228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.256426  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:28.256455  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:28.352284  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.381397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.451463  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.618604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.551580  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.843204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.651664  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.795637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.750151  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.366656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.850540  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.711776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:28.950536  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.61206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.050558  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.689657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.150472  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.693323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.248523  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:29.249877  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:29.250920  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.087876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.251356  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:29.251431  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:29.256588  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:29.256625  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:29.351223  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.372168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.452365  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.550611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.550983  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.162325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.650983  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.164825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.750689  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.764847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.851027  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.286539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:29.950588  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.348132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.050955  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.489929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.151351  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (2.534976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.248723  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:30.250041  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:30.250710  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.575759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.251515  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:30.251517  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:30.256722  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:30.256782  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:30.350571  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.767776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.450329  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.616499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.550719  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.893799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.650165  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.509049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.750261  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.514624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.850301  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.565974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:30.950502  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.62515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.050673  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.869813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.150705  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.959668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.249219  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:31.250051  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.367882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.250254  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:31.251688  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:31.251996  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:31.256891  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:31.256954  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:31.350552  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.800012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.450165  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.513417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.550266  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.434142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.650490  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.72785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.750399  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.68299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.850591  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.752264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:31.950213  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.486202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.050190  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.441937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.150250  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.490695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.249381  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:32.250789  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:32.251856  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:32.252232  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:32.252714  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.562729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.257004  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:32.265820  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:32.350395  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.607499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.450669  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.8463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.550219  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.502917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.650344  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.599133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.750389  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.644328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.855310  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.321127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:32.954222  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (3.712618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.052378  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.680833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.150326  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.583765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.249526  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:33.250487  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.718075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.250991  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:33.252486  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:33.252557  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:33.257170  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:33.265953  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:33.350294  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.48374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.450576  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.849571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.550232  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.423018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.650150  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.456165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.750681  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.946818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.850287  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.511382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:33.950106  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.375859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.050236  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.517487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.150366  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.62365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.250553  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.825486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.251313  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:34.252203  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:34.252606  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:34.252716  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:34.257288  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:34.266997  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:34.350075  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.330601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.450359  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.637138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.550203  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.451361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.650140  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.391864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.750272  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.584808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.853390  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (4.648121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:34.950215  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.322207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.050202  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.432424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.150329  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.639124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.250223  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.505296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.251554  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:35.252345  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:35.253589  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:35.253671  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:35.257845  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:35.267222  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:35.350277  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.524308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.450260  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.440293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.550394  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.655633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.650101  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.356266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.750421  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.674745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.850355  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.554503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:35.950252  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.501687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.050503  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.699324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.147938  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.246859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.149755  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.313867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.150480  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.314997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0919 09:10:36.151243  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (880.645µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.250164  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.390494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.251724  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:36.252499  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:36.253797  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:36.259749  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:36.259853  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:36.270990  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:10:36.350411  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.677939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.352124  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.266195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.356713  108216 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (4.185568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.359729  108216 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurefe8e01c5-4b6b-4cab-ab0a-61a181cb3ce4/pods/pidpressure-fake-name: (1.186756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
E0919 09:10:36.360266  108216 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0919 09:10:36.360594  108216 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30253&timeout=6m17s&timeoutSeconds=377&watch=true: (30.114161628s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0919 09:10:36.360743  108216 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30250&timeout=8m47s&timeoutSeconds=527&watch=true: (30.115007912s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47204]
I0919 09:10:36.360863  108216 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30250&timeout=9m57s&timeoutSeconds=597&watch=true: (30.113299624s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0919 09:10:36.360979  108216 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30252&timeout=7m11s&timeoutSeconds=431&watch=true: (30.113550132s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0919 09:10:36.361086  108216 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30252&timeout=8m16s&timeoutSeconds=496&watch=true: (30.117324276s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47202]
I0919 09:10:36.361192  108216 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30556&timeout=8m54s&timeoutSeconds=534&watch=true: (30.117519527s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46966]
I0919 09:10:36.361296  108216 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30250&timeout=6m26s&timeoutSeconds=386&watch=true: (30.116346516s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47218]
I0919 09:10:36.361391  108216 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30253&timeout=7m27s&timeoutSeconds=447&watch=true: (30.116041989s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0919 09:10:36.361488  108216 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30252&timeout=5m5s&timeoutSeconds=305&watch=true: (30.116034084s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47216]
I0919 09:10:36.361586  108216 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30250&timeoutSeconds=588&watch=true: (30.220477539s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46808]
I0919 09:10:36.362467  108216 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30250&timeout=8m33s&timeoutSeconds=513&watch=true: (30.116431805s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47210]
I0919 09:10:36.365370  108216 httplog.go:90] DELETE /api/v1/nodes: (3.55834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.365521  108216 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 09:10:36.366692  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (945.873µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0919 09:10:36.368408  108216 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.387537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
--- FAIL: TestNodePIDPressure (33.72s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-090243.xml



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.18s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0919 09:12:16.727152  108216 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 09:12:16.727562  108216 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 09:12:16.727581  108216 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 09:12:16.727593  108216 master.go:259] Using reconciler: 
I0919 09:12:16.730234  108216 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.731757  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.731936  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.733288  108216 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 09:12:16.733354  108216 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 09:12:16.733692  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.734372  108216 watch_cache.go:405] Replace watchCache (rev: 46653) 
I0919 09:12:16.734821  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.735012  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.736409  108216 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:12:16.736541  108216 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:12:16.736740  108216 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.737605  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.737782  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.739181  108216 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 09:12:16.739225  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.739375  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.739392  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.739462  108216 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 09:12:16.740110  108216 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 09:12:16.740280  108216 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.740506  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.740529  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.740727  108216 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 09:12:16.741043  108216 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 09:12:16.741128  108216 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 09:12:16.741203  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.741424  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.741447  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.742080  108216 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 09:12:16.742241  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.742270  108216 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 09:12:16.742393  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.742406  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.744919  108216 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 09:12:16.745076  108216 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.745105  108216 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 09:12:16.745266  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.745288  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.746217  108216 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 09:12:16.746332  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.746483  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.746500  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.746596  108216 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 09:12:16.747750  108216 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 09:12:16.748035  108216 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.747837  108216 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 09:12:16.749261  108216 watch_cache.go:405] Replace watchCache (rev: 46654) 
I0919 09:12:16.749410  108216 watch_cache.go:405] Replace watchCache (rev: 46654) 
I0919 09:12:16.749473  108216 watch_cache.go:405] Replace watchCache (rev: 46654) 
I0919 09:12:16.749520  108216 watch_cache.go:405] Replace watchCache (rev: 46654) 
I0919 09:12:16.749532  108216 watch_cache.go:405] Replace watchCache (rev: 46654) 
I0919 09:12:16.749577  108216 watch_cache.go:405] Replace watchCache (rev: 46654) 
I0919 09:12:16.749624  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.751153  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.751978  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.752085  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.752750  108216 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 09:12:16.752899  108216 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 09:12:16.753094  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.753404  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.753504  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.754136  108216 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 09:12:16.754215  108216 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 09:12:16.754535  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.755361  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.755739  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.755839  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.756584  108216 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 09:12:16.756712  108216 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 09:12:16.756740  108216 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.756924  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.756947  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.757862  108216 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 09:12:16.757969  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.758128  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.758147  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.758232  108216 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 09:12:16.758977  108216 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 09:12:16.759064  108216 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 09:12:16.759444  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.759693  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.759771  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.760551  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.760556  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.760575  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.761262  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.761367  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.761935  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.762134  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.762309  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.762321  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.762807  108216 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 09:12:16.762834  108216 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 09:12:16.762924  108216 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 09:12:16.763223  108216 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.763486  108216 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.764472  108216 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.765238  108216 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.765945  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.766627  108216 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.767150  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.767412  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.767697  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.767861  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.768142  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.768780  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.769194  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.770101  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.770501  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.771255  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.771630  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.772459  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.772829  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.773085  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.773375  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.773669  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.773921  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.774205  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.775058  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.775476  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.776759  108216 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.777672  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.778079  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.778469  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.779305  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.779710  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.780442  108216 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.780990  108216 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.781437  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.782038  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.782263  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.782366  108216 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 09:12:16.782392  108216 master.go:461] Enabling API group "authentication.k8s.io".
I0919 09:12:16.782421  108216 master.go:461] Enabling API group "authorization.k8s.io".
I0919 09:12:16.782591  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.782912  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.782946  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.784232  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:12:16.784329  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:12:16.785140  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.785211  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.786264  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.786296  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.787034  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:12:16.787089  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:12:16.787828  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.789425  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.789602  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.789625  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.790699  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:12:16.790809  108216 master.go:461] Enabling API group "autoscaling".
I0919 09:12:16.790745  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:12:16.791788  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.792197  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.792218  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.792964  108216 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 09:12:16.793056  108216 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 09:12:16.793137  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.793135  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.793347  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.793373  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.793952  108216 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 09:12:16.793977  108216 master.go:461] Enabling API group "batch".
I0919 09:12:16.794003  108216 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 09:12:16.794146  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.794236  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.794387  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.794412  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.794894  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.795109  108216 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 09:12:16.795133  108216 master.go:461] Enabling API group "certificates.k8s.io".
I0919 09:12:16.795208  108216 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 09:12:16.795270  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.795461  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.795484  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.796301  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.796384  108216 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:12:16.796500  108216 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:12:16.796529  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.796769  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.796789  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.797334  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.797399  108216 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:12:16.797416  108216 master.go:461] Enabling API group "coordination.k8s.io".
I0919 09:12:16.797428  108216 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 09:12:16.797530  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.797542  108216 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:12:16.797711  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.797732  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.798369  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.798590  108216 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:12:16.798615  108216 master.go:461] Enabling API group "extensions".
I0919 09:12:16.798706  108216 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:12:16.798808  108216 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.799009  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.799030  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.799560  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.800012  108216 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 09:12:16.800056  108216 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 09:12:16.800152  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.800303  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.800325  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.800907  108216 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:12:16.800931  108216 master.go:461] Enabling API group "networking.k8s.io".
I0919 09:12:16.800932  108216 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:12:16.800962  108216 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.801165  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.801192  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.801526  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.801877  108216 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 09:12:16.801898  108216 master.go:461] Enabling API group "node.k8s.io".
I0919 09:12:16.801945  108216 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 09:12:16.802052  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.802238  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.802265  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.802434  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.802757  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.803024  108216 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 09:12:16.803085  108216 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 09:12:16.803192  108216 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.803491  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.803517  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.804377  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.804400  108216 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 09:12:16.804418  108216 master.go:461] Enabling API group "policy".
I0919 09:12:16.804450  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.804466  108216 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 09:12:16.804776  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.804804  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.805774  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.805936  108216 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:12:16.806041  108216 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:12:16.806056  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.806241  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.806253  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.806948  108216 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:12:16.806990  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.807057  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.807124  108216 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:12:16.807199  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.807217  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.808006  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.810920  108216 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:12:16.811076  108216 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:12:16.811114  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.811328  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.811349  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.812081  108216 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:12:16.812144  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.812279  108216 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:12:16.812089  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.812374  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.812393  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.812977  108216 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:12:16.813106  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.813195  108216 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:12:16.813254  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.813284  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.813311  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.814103  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.814896  108216 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:12:16.815038  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.814932  108216 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:12:16.815674  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.815794  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.815907  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.816396  108216 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:12:16.816535  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.816717  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.816733  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.816792  108216 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:12:16.817301  108216 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:12:16.817324  108216 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 09:12:16.817945  108216 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:12:16.818333  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.818608  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.818763  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.818959  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.818975  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.819490  108216 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:12:16.819579  108216 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:12:16.819655  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.820024  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.820048  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.820618  108216 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:12:16.820633  108216 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 09:12:16.820758  108216 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 09:12:16.820783  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.820887  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.820930  108216 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:12:16.821131  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.821166  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.825747  108216 watch_cache.go:405] Replace watchCache (rev: 46655) 
I0919 09:12:16.831047  108216 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:12:16.831175  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.831339  108216 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:12:16.831950  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.833373  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.833394  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.834753  108216 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:12:16.834927  108216 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.835307  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.836829  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.834877  108216 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:12:16.841358  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.841415  108216 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 09:12:16.841452  108216 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.841628  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.841665  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.841751  108216 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 09:12:16.842711  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.843135  108216 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 09:12:16.843308  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.843560  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.843593  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.843696  108216 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 09:12:16.845304  108216 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:12:16.845500  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.845717  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.845747  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.845839  108216 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:12:16.846114  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.846418  108216 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:12:16.846436  108216 master.go:461] Enabling API group "storage.k8s.io".
I0919 09:12:16.846475  108216 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:12:16.846554  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.846752  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.846781  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.847264  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.847605  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.847658  108216 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 09:12:16.847768  108216 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 09:12:16.847818  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.848156  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.848183  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.848231  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.849911  108216 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 09:12:16.850049  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.850216  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.850237  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.850302  108216 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 09:12:16.851134  108216 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 09:12:16.851287  108216 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 09:12:16.851659  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.851398  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.852391  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.852724  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.852745  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.855308  108216 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 09:12:16.855439  108216 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 09:12:16.855704  108216 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.856018  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.856129  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.856238  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.857116  108216 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 09:12:16.857142  108216 master.go:461] Enabling API group "apps".
I0919 09:12:16.857169  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.857204  108216 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 09:12:16.857360  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.857379  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.857911  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.858157  108216 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:12:16.858182  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.858342  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.858359  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.858430  108216 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:12:16.859182  108216 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:12:16.859219  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.859290  108216 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:12:16.859369  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.859407  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.859419  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.860371  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.860416  108216 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:12:16.860450  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.860605  108216 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:12:16.860715  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.860738  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.861766  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.862741  108216 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:12:16.862760  108216 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 09:12:16.862789  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.863110  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:16.863132  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:16.863210  108216 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:12:16.864932  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.865502  108216 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:12:16.865519  108216 master.go:461] Enabling API group "events.k8s.io".
I0919 09:12:16.865560  108216 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:12:16.865721  108216 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.865852  108216 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866089  108216 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866179  108216 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866238  108216 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866315  108216 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866446  108216 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866503  108216 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866576  108216 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866638  108216 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.866681  108216 watch_cache.go:405] Replace watchCache (rev: 46656) 
I0919 09:12:16.867534  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.867880  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.868621  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.868933  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.869564  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.869837  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.870394  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.870627  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.871197  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.871409  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:12:16.871502  108216 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 09:12:16.872157  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.872301  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.872568  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.873205  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.873898  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.874718  108216 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.874968  108216 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.875627  108216 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.876240  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.876457  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.877276  108216 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:12:16.877338  108216 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 09:12:16.878065  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.878339  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.878798  108216 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.879288  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.879686  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.880502  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.880994  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.881485  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.882118  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.882721  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.883254  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:12:16.883316  108216 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 09:12:16.883864  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.884342  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:12:16.884406  108216 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 09:12:16.884986  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.885504  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.885830  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.886243  108216 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.886614  108216 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.887168  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.887770  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:12:16.887834  108216 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 09:12:16.888493  108216 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.889269  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.889525  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.890072  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.890254  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.890544  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.891114  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.891370  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.891631  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.892162  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.892405  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.892661  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:12:16.892734  108216 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 09:12:16.892741  108216 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 09:12:16.893422  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.894039  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.894668  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.895307  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.896174  108216 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4c7c91c7-6060-4101-b461-34059a05c20f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:12:16.899178  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:16.899206  108216 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 09:12:16.899217  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:16.899228  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:16.899237  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:16.899244  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:16.899279  108216 httplog.go:90] GET /healthz: (210.297µs) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:16.900268  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.240553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.903253  108216 httplog.go:90] GET /api/v1/services: (1.653712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.916494  108216 httplog.go:90] GET /api/v1/services: (5.517225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.919774  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:16.919812  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:16.919826  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:16.919837  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:16.919848  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:16.919883  108216 httplog.go:90] GET /healthz: (266.651µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:16.920490  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (924.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.922597  108216 httplog.go:90] GET /api/v1/services: (1.318864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:16.923221  108216 httplog.go:90] POST /api/v1/namespaces: (1.296592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.925025  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.472393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.925439  108216 httplog.go:90] GET /api/v1/services: (1.004837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:16.927516  108216 httplog.go:90] POST /api/v1/namespaces: (1.98697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.932046  108216 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (3.970134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:16.935858  108216 httplog.go:90] POST /api/v1/namespaces: (3.248954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.000027  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.000058  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.000067  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.000073  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.000079  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.000127  108216 httplog.go:90] GET /healthz: (222.486µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.020628  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.020691  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.020704  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.020713  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.020721  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.020760  108216 httplog.go:90] GET /healthz: (298.49µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.100000  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.100037  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.100045  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.100051  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.100057  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.100081  108216 httplog.go:90] GET /healthz: (228.573µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.120625  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.120692  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.120704  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.120714  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.120721  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.120779  108216 httplog.go:90] GET /healthz: (295.926µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.200086  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.200113  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.200125  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.200135  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.200144  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.200190  108216 httplog.go:90] GET /healthz: (263.612µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.220584  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.220610  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.220618  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.220624  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.220630  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.220665  108216 httplog.go:90] GET /healthz: (201.634µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.300008  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.300035  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.300043  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.300052  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.300058  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.300091  108216 httplog.go:90] GET /healthz: (224.381µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.320570  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.320610  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.320622  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.320631  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.320667  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.320713  108216 httplog.go:90] GET /healthz: (286.663µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.400098  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.400127  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.400139  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.400148  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.400156  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.400195  108216 httplog.go:90] GET /healthz: (250.493µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.420526  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.420552  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.420561  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.420568  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.420574  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.420600  108216 httplog.go:90] GET /healthz: (189.85µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.500111  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.500144  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.500158  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.500167  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.500175  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.500207  108216 httplog.go:90] GET /healthz: (243.989µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.520824  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.520870  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.520890  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.520901  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.520913  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.520948  108216 httplog.go:90] GET /healthz: (324.242µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.600087  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.600116  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.600127  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.600133  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.600139  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.600179  108216 httplog.go:90] GET /healthz: (220.962µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.620588  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.620619  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.620632  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.620655  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.620663  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.620713  108216 httplog.go:90] GET /healthz: (240.69µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.700028  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.700067  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.700082  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.700092  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.700099  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.700131  108216 httplog.go:90] GET /healthz: (249.94µs) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.720508  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:12:17.720537  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.720548  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.720557  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.720566  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.720594  108216 httplog.go:90] GET /healthz: (204.985µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.728229  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:12:17.728293  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:12:17.802823  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.802859  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.802870  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.802879  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.802924  108216 httplog.go:90] GET /healthz: (2.619979ms) 0 [Go-http-client/1.1 127.0.0.1:54970]
I0919 09:12:17.821351  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.821375  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.821382  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.821388  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.821426  108216 httplog.go:90] GET /healthz: (955.75µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.900996  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.271445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.901047  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.901066  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:12:17.901077  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:12:17.901085  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:12:17.901117  108216 httplog.go:90] GET /healthz: (1.018655ms) 0 [Go-http-client/1.1 127.0.0.1:55408]
I0919 09:12:17.901262  108216 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.641738ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.901441  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.666102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.902705  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (893.966µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.902915  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.348011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.902978  108216 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.150257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55408]
I0919 09:12:17.903140  108216 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 09:12:17.904408  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.087878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54970]
I0919 09:12:17.904576  108216 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.31844ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.904577  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.349292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.905669  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (696.681µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.906992  108216 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.119191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.907292  108216 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 09:12:17.907310  108216 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 09:12:17.907508  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.546139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.908584  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (673.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.910404  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.202212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.911298  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (563.56µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.912453  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (878.979µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.914370  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (755.093µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.916234  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.503166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.917045  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 09:12:17.918089  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (832.769µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.920081  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.493696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.920265  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 09:12:17.921322  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (814.33µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.921830  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:17.921860  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:17.921891  108216 httplog.go:90] GET /healthz: (1.033923ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:17.923510  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.923712  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 09:12:17.924772  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (881.204µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.927053  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.731419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.927408  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:12:17.928539  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (892.679µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.930461  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.53989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.930843  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 09:12:17.931941  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (803.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.933918  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.570857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.934088  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 09:12:17.934886  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (654.327µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.936786  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.937035  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 09:12:17.938139  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (808.204µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.940263  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.526818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.940766  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 09:12:17.941707  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (640.229µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.943845  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.67639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.944227  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 09:12:17.945365  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (750.096µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.947716  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.898665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.948071  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 09:12:17.949560  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.312635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.951313  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.295572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.951523  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 09:12:17.952450  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (728.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.956463  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.397006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.956738  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 09:12:17.957905  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (831.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.959905  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.960084  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 09:12:17.961114  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (811.944µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.963798  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.509514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.964097  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 09:12:17.965406  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (855.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.971053  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.782625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.971273  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 09:12:17.973370  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.878148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.975324  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.55832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.975554  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 09:12:17.976608  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (818.661µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.978457  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.978712  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 09:12:17.979581  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (643.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.981341  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.38474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.981545  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:12:17.982430  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (683.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.984253  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.331932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.984444  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 09:12:17.985796  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.162025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.987930  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.721823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.988225  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 09:12:17.989785  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.292209ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.993287  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.878134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.993620  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 09:12:17.995504  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.611906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.997771  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.777226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:17.998165  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 09:12:17.999149  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (724.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.001149  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.001193  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.001228  108216 httplog.go:90] GET /healthz: (1.387637ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:18.001474  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.001702  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 09:12:18.002755  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (808.96µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.004496  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.004768  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:12:18.006152  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.2563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.008607  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.839722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.008816  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 09:12:18.009974  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (967.442µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.016052  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.648035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.016365  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:12:18.017911  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.221183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.020905  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.133114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.021012  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.021042  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.021069  108216 httplog.go:90] GET /healthz: (750.634µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.021316  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 09:12:18.022625  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.05649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.024763  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.649576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.024965  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:12:18.026026  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (878.08µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.028376  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.851161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.028715  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:12:18.029859  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (869.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.032130  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.769965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.032403  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:12:18.033506  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (921.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.035404  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.414967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.035962  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:12:18.037024  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (724.154µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.039343  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.877725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.039672  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:12:18.040678  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (719.943µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.042602  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.404735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.042862  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:12:18.043881  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (820.922µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.045752  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.527599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.045974  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:12:18.046860  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (708.883µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.048812  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.57149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.049006  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:12:18.050019  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (794.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.051723  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.051900  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:12:18.052786  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (727.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.054372  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.277828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.054564  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:12:18.055448  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (705.675µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.056859  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.057816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.057034  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:12:18.058190  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (942.2µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.059832  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.254954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.060067  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:12:18.061016  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (795.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.063100  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.75539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.063311  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:12:18.064955  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.497661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.067370  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.975225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.069039  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:12:18.071387  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.588729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.074255  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.239247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.074401  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:12:18.077275  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (2.703615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.079406  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.669998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.079606  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:12:18.084686  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (4.707454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.086884  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.72366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.087150  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:12:18.089483  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.346625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.092017  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.869924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.092318  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:12:18.094466  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.886498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.101442  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.101496  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.101566  108216 httplog.go:90] GET /healthz: (1.594809ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:18.101578  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.443414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.101810  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:12:18.103190  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.158343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.107106  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.81332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.107520  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:12:18.109114  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (883.469µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.113129  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.09534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.113354  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:12:18.115793  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.933811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.118454  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.225088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.118816  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:12:18.119924  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (910.514µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.122480  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.122761  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.122789  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.122852  108216 httplog.go:90] GET /healthz: (2.479925ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.122873  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:12:18.124078  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (951.606µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.127415  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.708715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.127714  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:12:18.141491  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.726109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.163978  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.598852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.164289  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:12:18.181207  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.828006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.202350  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.950541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.202835  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:12:18.203123  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.203216  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.203359  108216 httplog.go:90] GET /healthz: (3.575237ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:18.220390  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.106066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.221746  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.221771  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.221804  108216 httplog.go:90] GET /healthz: (1.299061ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.241231  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.894206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.241547  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 09:12:18.260572  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.274824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.281493  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.03088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.281786  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 09:12:18.300725  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.300758  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.300807  108216 httplog.go:90] GET /healthz: (977.335µs) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:18.300931  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.569888ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.321297  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.867892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.321298  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.321391  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.321430  108216 httplog.go:90] GET /healthz: (1.018408ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.321544  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 09:12:18.340602  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.246967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.361552  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.056097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.361794  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:12:18.380222  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (885.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.401217  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.401269  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.401317  108216 httplog.go:90] GET /healthz: (1.054429ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:18.401491  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.401763  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 09:12:18.420600  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.24786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.421224  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.421247  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.421278  108216 httplog.go:90] GET /healthz: (860.688µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.441201  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.821939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.441444  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:12:18.460337  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.015832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.481545  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.220083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.481797  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 09:12:18.500937  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.500969  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.501007  108216 httplog.go:90] GET /healthz: (863.655µs) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:18.501339  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.940293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.521587  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.132657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.521842  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.521871  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.521899  108216 httplog.go:90] GET /healthz: (978.44µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.522098  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:12:18.540618  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.286926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.561845  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.39423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.562156  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:12:18.580873  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.478657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.601489  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.601519  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.601580  108216 httplog.go:90] GET /healthz: (1.763265ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:18.601685  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.191072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.601919  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 09:12:18.620767  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.375103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.621128  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.621151  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.621181  108216 httplog.go:90] GET /healthz: (822.735µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.641164  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.785338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.641386  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:12:18.660899  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.484937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.681928  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.538345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.682292  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:12:18.700542  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.700579  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.700683  108216 httplog.go:90] GET /healthz: (872.241µs) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:18.700698  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.361636ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.721352  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.721391  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.721673  108216 httplog.go:90] GET /healthz: (1.182702ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.722308  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.868652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.722585  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:12:18.740674  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.325458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.761538  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.159502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.761778  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:12:18.780738  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.308941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.801965  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.802170  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.802088  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.633936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.802654  108216 httplog.go:90] GET /healthz: (2.534565ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:18.802714  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:12:18.820574  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.15647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.821443  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.821551  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.821732  108216 httplog.go:90] GET /healthz: (1.300643ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.841330  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.000843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.841598  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:12:18.860562  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.181018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.881579  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.169172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.881983  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:12:18.900868  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.901090  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.901279  108216 httplog.go:90] GET /healthz: (1.46642ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:18.901123  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.749937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.921084  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:18.921287  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:18.921486  108216 httplog.go:90] GET /healthz: (1.170087ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:18.921433  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.06752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.921928  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:12:18.940570  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.162833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.961149  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.69697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:18.961420  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:12:18.980516  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.19803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.001215  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.001246  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.001284  108216 httplog.go:90] GET /healthz: (1.044814ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.001827  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.476036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.002053  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:12:19.020486  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.167649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.021322  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.021351  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.021397  108216 httplog.go:90] GET /healthz: (758.145µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.041045  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.667015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.041405  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:12:19.060340  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.014746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.081304  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.943497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.081660  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:12:19.100695  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.259573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.101198  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.101221  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.101255  108216 httplog.go:90] GET /healthz: (1.459041ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:19.121237  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.121272  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.121307  108216 httplog.go:90] GET /healthz: (857.307µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.121667  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.158577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.121844  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:12:19.140601  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.227728ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.161469  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.982843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.161776  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:12:19.180577  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.170248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.201186  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.818649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.201192  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.201352  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.201403  108216 httplog.go:90] GET /healthz: (1.520537ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.201597  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:12:19.220678  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.306386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.221385  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.221439  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.221545  108216 httplog.go:90] GET /healthz: (1.0073ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.243227  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.383196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.243505  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:12:19.260793  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.398654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.281001  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.676739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.282224  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:12:19.300416  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.052139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.300875  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.300900  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.300932  108216 httplog.go:90] GET /healthz: (888.668µs) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.321541  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.097974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.321986  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:12:19.322829  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.322851  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.322882  108216 httplog.go:90] GET /healthz: (2.384498ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.340481  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.117973ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.361283  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.361522  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:12:19.380457  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.082775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.400799  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.400843  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.400896  108216 httplog.go:90] GET /healthz: (1.024197ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.401343  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.975111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.401658  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:12:19.421202  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.82631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.421262  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.421304  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.421332  108216 httplog.go:90] GET /healthz: (890.214µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.441127  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.767397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.441353  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:12:19.460407  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (998.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.481062  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.708356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.481354  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:12:19.500312  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.014589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.500508  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.500665  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.500713  108216 httplog.go:90] GET /healthz: (855.982µs) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:19.521093  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.521128  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.521159  108216 httplog.go:90] GET /healthz: (801.894µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.521337  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.005907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.521749  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:12:19.540753  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.41015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.561209  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.819994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.561458  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:12:19.580486  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.117911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.601070  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.601103  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.601200  108216 httplog.go:90] GET /healthz: (1.390608ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.601415  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.120721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.601744  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:12:19.620443  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.066511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.621055  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.621197  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.621380  108216 httplog.go:90] GET /healthz: (1.058354ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.641046  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.670604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.641309  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:12:19.660794  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.355498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.662797  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.510984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.681299  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.903048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.681599  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 09:12:19.700754  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.355578ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.700924  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.700954  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.700984  108216 httplog.go:90] GET /healthz: (1.217371ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:19.702725  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.195351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.721477  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.721608  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.721752  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.339173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.721795  108216 httplog.go:90] GET /healthz: (1.265556ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.721983  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:12:19.740566  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.231797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.742426  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.407988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.762084  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.558791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.762335  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:12:19.780824  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.436783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.782730  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.432261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.800683  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.800712  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.800748  108216 httplog.go:90] GET /healthz: (933.893µs) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.801741  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.405287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.802011  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:12:19.820809  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.332543ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.821135  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.821159  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.821194  108216 httplog.go:90] GET /healthz: (858.711µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.822580  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.204648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.841259  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.931558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.841503  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:12:19.860732  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.385485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.862436  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.188226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.881095  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.758443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.881425  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:12:19.900835  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.900867  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.900904  108216 httplog.go:90] GET /healthz: (1.086409ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:19.901036  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.620623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.902916  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.421901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.921104  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.774345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:19.921353  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:19.921381  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:19.921420  108216 httplog.go:90] GET /healthz: (1.047629ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.921580  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:12:19.940622  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.26724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.943323  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.233871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.961353  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.037023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.961791  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 09:12:19.980503  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.129682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:19.982075  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.095386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.001469  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.121156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.001798  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:12:20.001884  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:20.001950  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:20.001996  108216 httplog.go:90] GET /healthz: (2.112904ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:20.020363  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.105829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.021402  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:20.021431  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:20.021500  108216 httplog.go:90] GET /healthz: (1.049196ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.022483  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.320283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.041173  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.78068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.041505  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:12:20.060734  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.382565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.062627  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.280063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.080667  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.381991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.080979  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:12:20.100700  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.252398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.100846  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:20.100875  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:20.101064  108216 httplog.go:90] GET /healthz: (1.153641ms) 0 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:20.102480  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.271653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.121581  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:20.121617  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:20.121694  108216 httplog.go:90] GET /healthz: (1.310023ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.121732  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.339174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I0919 09:12:20.121962  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:12:20.140514  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.165874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.142255  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.218781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.162041  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.66754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.162295  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:12:20.180744  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.356176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.182564  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.151008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.201014  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:12:20.201049  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:12:20.201109  108216 httplog.go:90] GET /healthz: (1.277067ms) 0 [Go-http-client/1.1 127.0.0.1:54968]
I0919 09:12:20.201355  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.94346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.201569  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:12:20.221263  108216 httplog.go:90] GET /healthz: (820.945µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.222667  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.003389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.224404  108216 httplog.go:90] POST /api/v1/namespaces: (1.386603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.225611  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (731.64µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.229305  108216 httplog.go:90] POST /api/v1/namespaces/default/services: (3.25241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.230818  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (855.093µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.231935  108216 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (732.134µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
E0919 09:12:20.232277  108216 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0919 09:12:20.301021  108216 httplog.go:90] GET /healthz: (1.006405ms) 200 [Go-http-client/1.1 127.0.0.1:55406]
I0919 09:12:20.303867  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.016293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
W0919 09:12:20.304146  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304298  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304361  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304437  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304496  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304540  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304588  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304635  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304727  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304765  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:12:20.304871  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:12:20.306285  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.159297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.308296  108216 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 09:12:20.308417  108216 factory.go:321] Registering predicate: PredicateOne
I0919 09:12:20.308479  108216 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 09:12:20.308512  108216 factory.go:321] Registering predicate: PredicateTwo
I0919 09:12:20.308539  108216 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 09:12:20.308573  108216 factory.go:336] Registering priority: PriorityOne
I0919 09:12:20.308602  108216 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 09:12:20.308636  108216 factory.go:336] Registering priority: PriorityTwo
I0919 09:12:20.308661  108216 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 09:12:20.308670  108216 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 09:12:20.310423  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.366886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
W0919 09:12:20.310714  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:12:20.312160  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.018469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.312363  108216 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:12:20.312389  108216 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 09:12:20.312400  108216 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 09:12:20.312406  108216 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:12:20.313896  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.096956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
W0919 09:12:20.314125  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:12:20.315168  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (797.674µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.315386  108216 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:12:20.315409  108216 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 09:12:20.316857  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.131627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
W0919 09:12:20.317091  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:12:20.318049  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (719.337µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.318400  108216 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 09:12:20.318428  108216 factory.go:321] Registering predicate: PredicateOne
I0919 09:12:20.318435  108216 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 09:12:20.318441  108216 factory.go:321] Registering predicate: PredicateTwo
I0919 09:12:20.318446  108216 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 09:12:20.318456  108216 factory.go:336] Registering priority: PriorityOne
I0919 09:12:20.318463  108216 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 09:12:20.318472  108216 factory.go:336] Registering priority: PriorityTwo
I0919 09:12:20.318478  108216 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 09:12:20.318485  108216 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 09:12:20.320161  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.342156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
W0919 09:12:20.320429  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:12:20.321498  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (793.546µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.321793  108216 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:12:20.321818  108216 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 09:12:20.321829  108216 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 09:12:20.321834  108216 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:12:20.501826  108216 request.go:538] Throttling request took 179.730765ms, request: POST:http://127.0.0.1:42969/api/v1/namespaces/kube-system/configmaps
I0919 09:12:20.504169  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.097037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
W0919 09:12:20.504569  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:12:20.701825  108216 request.go:538] Throttling request took 197.042633ms, request: GET:http://127.0.0.1:42969/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0919 09:12:20.704229  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (2.131844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.704877  108216 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:12:20.704914  108216 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 09:12:20.901898  108216 request.go:538] Throttling request took 196.625194ms, request: DELETE:http://127.0.0.1:42969/api/v1/nodes
I0919 09:12:20.903607  108216 httplog.go:90] DELETE /api/v1/nodes: (1.43721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
I0919 09:12:20.904030  108216 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 09:12:20.905433  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.098315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55406]
--- FAIL: TestSchedulerCreationFromConfigMap (4.18s)
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-090243.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 09:13:12.089923  108216 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.30s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-090243.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0919 09:14:22.364835  108216 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 09:14:22.364926  108216 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 09:14:22.364957  108216 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 09:14:22.364985  108216 master.go:259] Using reconciler: 
I0919 09:14:22.366313  108216 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.366632  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.366839  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.367692  108216 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 09:14:22.367735  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.367801  108216 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 09:14:22.368076  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.368106  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.368863  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.369002  108216 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:14:22.369052  108216 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.369104  108216 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:14:22.369327  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.369354  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.369768  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.370092  108216 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 09:14:22.370138  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.370166  108216 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 09:14:22.370370  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.370414  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.371070  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.371313  108216 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 09:14:22.371409  108216 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 09:14:22.371553  108216 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.371846  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.371867  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.372220  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.373125  108216 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 09:14:22.373225  108216 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 09:14:22.373271  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.373448  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.373477  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.373829  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.374697  108216 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 09:14:22.374776  108216 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 09:14:22.375583  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.376096  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.376452  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.376544  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.377237  108216 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 09:14:22.377387  108216 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.377849  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.377971  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.377458  108216 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 09:14:22.378709  108216 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 09:14:22.378829  108216 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 09:14:22.378881  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.379072  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.379099  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.382879  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.382879  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.383324  108216 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 09:14:22.383522  108216 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.383635  108216 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 09:14:22.384021  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.384042  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.384758  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.384850  108216 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 09:14:22.384962  108216 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 09:14:22.384982  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.385210  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.385249  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.385758  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.386153  108216 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 09:14:22.386300  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.386491  108216 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 09:14:22.386503  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.386719  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.387391  108216 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 09:14:22.387458  108216 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 09:14:22.387605  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.387617  108216 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.387819  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.387847  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.388667  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.388681  108216 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 09:14:22.388844  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.388745  108216 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 09:14:22.389133  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.389160  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.389974  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.390563  108216 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 09:14:22.390607  108216 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 09:14:22.390801  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.391532  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.393769  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.393856  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.394760  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.394794  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.395614  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.395851  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.395874  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.396574  108216 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 09:14:22.396600  108216 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 09:14:22.396602  108216 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 09:14:22.396962  108216 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.397154  108216 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.397881  108216 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.397990  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.398436  108216 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.398963  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.399559  108216 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.400032  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.400167  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.400387  108216 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.400899  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.401392  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.401571  108216 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.402172  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.402464  108216 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.402908  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.403105  108216 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.403602  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.403810  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.403941  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.404062  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.404242  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.404380  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.404542  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.405108  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.405350  108216 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.405939  108216 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.406540  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.406802  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.407031  108216 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.407612  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.407904  108216 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.408459  108216 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.408987  108216 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.409455  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.410015  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.410226  108216 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.410346  108216 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 09:14:22.410370  108216 master.go:461] Enabling API group "authentication.k8s.io".
I0919 09:14:22.410389  108216 master.go:461] Enabling API group "authorization.k8s.io".
I0919 09:14:22.410546  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.410802  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.410836  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.411996  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:14:22.412100  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:14:22.412172  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.412389  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.412412  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.413104  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:14:22.413150  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:14:22.413226  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.413259  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.413485  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.413519  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.413882  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.414167  108216 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:14:22.414200  108216 master.go:461] Enabling API group "autoscaling".
I0919 09:14:22.414213  108216 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:14:22.414347  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.414526  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.414559  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.415169  108216 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 09:14:22.415256  108216 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 09:14:22.415333  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.415537  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.415563  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.416205  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.416251  108216 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 09:14:22.416273  108216 master.go:461] Enabling API group "batch".
I0919 09:14:22.416303  108216 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 09:14:22.416397  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.416412  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.416674  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.416700  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.417260  108216 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 09:14:22.417346  108216 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 09:14:22.417269  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.417668  108216 master.go:461] Enabling API group "certificates.k8s.io".
I0919 09:14:22.417861  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.418250  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.418323  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.418379  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.419591  108216 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:14:22.419795  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.419972  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.420098  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.420015  108216 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:14:22.421115  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.421394  108216 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:14:22.421446  108216 master.go:461] Enabling API group "coordination.k8s.io".
I0919 09:14:22.421460  108216 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 09:14:22.421530  108216 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:14:22.421839  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.422030  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.422057  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.422345  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.422636  108216 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:14:22.422693  108216 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:14:22.422705  108216 master.go:461] Enabling API group "extensions".
I0919 09:14:22.422877  108216 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.423119  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.423147  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.423809  108216 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 09:14:22.423874  108216 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 09:14:22.423959  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.423972  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.424179  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.424212  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.424483  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.425171  108216 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:14:22.425196  108216 master.go:461] Enabling API group "networking.k8s.io".
I0919 09:14:22.425216  108216 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:14:22.425227  108216 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.425394  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.425416  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.425996  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.426060  108216 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 09:14:22.426039  108216 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 09:14:22.426141  108216 master.go:461] Enabling API group "node.k8s.io".
I0919 09:14:22.426288  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.426502  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.426522  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.426587  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.427210  108216 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 09:14:22.427356  108216 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 09:14:22.427625  108216 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.427913  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.428172  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.428339  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.429363  108216 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 09:14:22.429396  108216 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 09:14:22.429399  108216 master.go:461] Enabling API group "policy".
I0919 09:14:22.429524  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.429754  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.429794  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.430350  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.430356  108216 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:14:22.430376  108216 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:14:22.430806  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.431115  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.431218  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.431434  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.431819  108216 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:14:22.431956  108216 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:14:22.432073  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.432352  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.432430  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.432703  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.433112  108216 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:14:22.433149  108216 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:14:22.433277  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.433506  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.433543  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.433922  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.434241  108216 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:14:22.434285  108216 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:14:22.434293  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.434547  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.434572  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.435187  108216 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:14:22.435229  108216 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:14:22.435326  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.435337  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.435575  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.435598  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.436190  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.436866  108216 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:14:22.436921  108216 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:14:22.436969  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.437580  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.437613  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.437754  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.438452  108216 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:14:22.438503  108216 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:14:22.438612  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.438838  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.438870  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.439239  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.439438  108216 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:14:22.439469  108216 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:14:22.439474  108216 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 09:14:22.440237  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.441743  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.442027  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.442061  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.442929  108216 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:14:22.442976  108216 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:14:22.443040  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.443216  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.443256  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.444179  108216 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:14:22.444331  108216 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 09:14:22.444488  108216 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 09:14:22.444207  108216 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:14:22.444266  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.444800  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.445012  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.445068  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.445199  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.445708  108216 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:14:22.445759  108216 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:14:22.445862  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.446058  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.446087  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.446582  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.446810  108216 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:14:22.446854  108216 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:14:22.446849  108216 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.447020  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.447045  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.447884  108216 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 09:14:22.447915  108216 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.447976  108216 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 09:14:22.448019  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.448122  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.448150  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.448808  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.449506  108216 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 09:14:22.449536  108216 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 09:14:22.449701  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.449913  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.449942  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.450314  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.450526  108216 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:14:22.450692  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.450705  108216 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:14:22.450858  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.450871  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.451354  108216 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:14:22.451374  108216 master.go:461] Enabling API group "storage.k8s.io".
I0919 09:14:22.451392  108216 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:14:22.451520  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.451782  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.451880  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.451903  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.452571  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.452844  108216 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 09:14:22.452932  108216 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 09:14:22.453133  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.453422  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.453507  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.456278  108216 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 09:14:22.456358  108216 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 09:14:22.456426  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.456661  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.456687  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.456949  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.457348  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.457420  108216 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 09:14:22.457554  108216 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 09:14:22.457560  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.457856  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.457881  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.458450  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.458502  108216 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 09:14:22.458551  108216 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 09:14:22.458788  108216 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.459109  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.459166  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.459296  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.460019  108216 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 09:14:22.460046  108216 master.go:461] Enabling API group "apps".
I0919 09:14:22.460073  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.460095  108216 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 09:14:22.460301  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.460346  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.460937  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.461091  108216 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:14:22.461177  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.461218  108216 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:14:22.461389  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.461408  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.462105  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.462167  108216 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:14:22.462136  108216 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:14:22.462317  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.462498  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.462518  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.463153  108216 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:14:22.463279  108216 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:14:22.463830  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.463888  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.464008  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.464021  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.464212  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.464948  108216 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:14:22.465040  108216 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 09:14:22.465121  108216 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.464992  108216 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:14:22.465608  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:22.465716  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:22.466382  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.466509  108216 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:14:22.466867  108216 master.go:461] Enabling API group "events.k8s.io".
I0919 09:14:22.466561  108216 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:14:22.467326  108216 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.467719  108216 watch_cache.go:405] Replace watchCache (rev: 59892) 
I0919 09:14:22.467828  108216 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.468183  108216 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.468505  108216 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.468742  108216 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.468956  108216 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.469214  108216 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.469431  108216 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.469597  108216 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.469779  108216 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.470769  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.471075  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.471858  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.472133  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.472794  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.473070  108216 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.473748  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.474100  108216 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.474836  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.475148  108216 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:14:22.475260  108216 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 09:14:22.475816  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.476018  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.476265  108216 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.477005  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.477768  108216 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.478414  108216 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.478713  108216 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.479613  108216 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.480360  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.480702  108216 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.481329  108216 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:14:22.481439  108216 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 09:14:22.482179  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.482529  108216 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.483091  108216 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.483677  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.484175  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.484781  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.485574  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.486273  108216 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.486792  108216 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.487401  108216 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.487995  108216 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:14:22.488100  108216 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 09:14:22.488709  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.489221  108216 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:14:22.489338  108216 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 09:14:22.489846  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.490419  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.490729  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.491202  108216 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.491611  108216 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.492077  108216 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.492626  108216 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:14:22.492738  108216 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 09:14:22.493401  108216 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.493975  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.494340  108216 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.494933  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.495233  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.495486  108216 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.496084  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.496439  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.496812  108216 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.497680  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.498073  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.498422  108216 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:14:22.498543  108216 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 09:14:22.498605  108216 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 09:14:22.499327  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.500083  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.501042  108216 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.501825  108216 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.502824  108216 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ba50883f-f59e-426b-8e49-ff431c1f1cb2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:14:22.507082  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.507120  108216 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 09:14:22.507128  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.507135  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.507141  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.507148  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.507184  108216 httplog.go:90] GET /healthz: (193.241µs) 0 [Go-http-client/1.1 127.0.0.1:51240]
I0919 09:14:22.508348  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.345943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51242]
I0919 09:14:22.511302  108216 httplog.go:90] GET /api/v1/services: (1.387667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51242]
I0919 09:14:22.515932  108216 httplog.go:90] GET /api/v1/services: (1.212469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51242]
I0919 09:14:22.518016  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.518043  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.518053  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.518059  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.518065  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.518084  108216 httplog.go:90] GET /healthz: (158.073µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51242]
I0919 09:14:22.519419  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.447492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51240]
I0919 09:14:22.519535  108216 httplog.go:90] GET /api/v1/services: (1.032387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51242]
I0919 09:14:22.519734  108216 httplog.go:90] GET /api/v1/services: (777.246µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.521147  108216 httplog.go:90] POST /api/v1/namespaces: (1.243346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51242]
I0919 09:14:22.522130  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (668.261µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.523853  108216 httplog.go:90] POST /api/v1/namespaces: (1.172745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.525032  108216 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (771.921µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.526428  108216 httplog.go:90] POST /api/v1/namespaces: (1.020719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.608115  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.608171  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.608185  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.608195  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.608205  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.608243  108216 httplog.go:90] GET /healthz: (269.436µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:22.618988  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.619017  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.619029  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.619035  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.619042  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.619070  108216 httplog.go:90] GET /healthz: (211.955µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.708143  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.708348  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.708419  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.708461  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.708494  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.708634  108216 httplog.go:90] GET /healthz: (641.431µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:22.719007  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.719244  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.719296  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.719342  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.719392  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.719572  108216 httplog.go:90] GET /healthz: (735.237µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.788404  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.788501  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.788515  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.788881  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.788965  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.789055  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.808072  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.808117  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.808130  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.808140  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.808149  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.808185  108216 httplog.go:90] GET /healthz: (268.763µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:22.818982  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.819015  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.819026  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.819033  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.819038  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.819094  108216 httplog.go:90] GET /healthz: (248.109µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.908048  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.908148  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.908159  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.908165  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.908171  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.908362  108216 httplog.go:90] GET /healthz: (431.192µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:22.919011  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:22.919165  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:22.919197  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:22.919224  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:22.919252  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:22.919382  108216 httplog.go:90] GET /healthz: (503.737µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:22.932101  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.932588  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.934552  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.934589  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.934766  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.938109  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:22.992197  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.008161  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.008375  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.008421  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.008480  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.008530  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.008714  108216 httplog.go:90] GET /healthz: (715.84µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.019025  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.019205  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.019266  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.019312  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.019354  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.019505  108216 httplog.go:90] GET /healthz: (629.922µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.108195  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.108389  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.108446  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.108487  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.108530  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.108684  108216 httplog.go:90] GET /healthz: (643.712µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.118985  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.119022  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.119035  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.119045  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.119052  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.119091  108216 httplog.go:90] GET /healthz: (258.407µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.129172  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.208209  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.208258  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.208271  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.208281  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.208290  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.208349  108216 httplog.go:90] GET /healthz: (296.865µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.219008  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.219045  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.219058  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.219067  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.219075  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.219125  108216 httplog.go:90] GET /healthz: (274.231µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.308206  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.308244  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.308256  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.308265  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.308275  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.308326  108216 httplog.go:90] GET /healthz: (262.259µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.319002  108216 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:14:23.319041  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.319052  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.319061  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.319071  108216 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.319106  108216 httplog.go:90] GET /healthz: (251.496µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.326463  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.326619  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.326623  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.326666  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.327627  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.328596  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.328676  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.364993  108216 client.go:361] parsed scheme: "endpoint"
I0919 09:14:23.365142  108216 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:14:23.409034  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.409066  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.409077  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.409085  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.409131  108216 httplog.go:90] GET /healthz: (1.161729ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.419726  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.419755  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.419764  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.419769  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.419808  108216 httplog.go:90] GET /healthz: (964.519µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.508464  108216 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.390804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.508535  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.427551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51240]
I0919 09:14:23.508557  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0919 09:14:23.509221  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.509245  108216 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:14:23.509254  108216 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:14:23.509264  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:14:23.509335  108216 httplog.go:90] GET /healthz: (969.776µs) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:23.510050  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.079636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51248]
I0919 09:14:23.510062  108216 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (944.538µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51252]
I0919 09:14:23.510507  108216 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.530258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.510677  108216 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 09:14:23.511698  108216 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (858.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.511971  108216 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.469021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51252]
I0919 09:14:23.512206  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.707767ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.513425  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (831.814µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51252]
I0919 09:14:23.513435  108216 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.346799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.513610  108216 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 09:14:23.513631  108216 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 09:14:23.514721  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (852.485µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.515794  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (692.829µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.516950  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (702.791µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.518051  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (824.872µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.519186  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (719.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.519428  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.519449  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.519581  108216 httplog.go:90] GET /healthz: (856.931µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.520237  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (761.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.521916  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.522213  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 09:14:23.523118  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (640.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.524587  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.09725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.524795  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 09:14:23.525792  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (723.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.527378  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.527520  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 09:14:23.528427  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (743.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.530200  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.264061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.530357  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:14:23.531380  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (856.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.533217  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.384957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.533426  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 09:14:23.534301  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (693.488µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.535903  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.269872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.536116  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 09:14:23.537046  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (707.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.538911  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.539126  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 09:14:23.540047  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (723.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.541800  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.343081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.541970  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 09:14:23.542972  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (812.307µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.545157  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.626142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.545424  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 09:14:23.546437  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (825.924µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.548618  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.833608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.548955  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 09:14:23.549833  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (725.25µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.551341  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.083062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.551551  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 09:14:23.552369  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (624.19µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.554146  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.554485  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 09:14:23.555365  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (663.457µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.556769  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.076002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.557081  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 09:14:23.557916  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (636.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.559402  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.147018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.559558  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 09:14:23.560280  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (600.043µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.561684  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.065072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.561878  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 09:14:23.562735  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (680.263µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.564234  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.17309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.564411  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 09:14:23.565241  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (675.169µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.566760  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.192673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.566944  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 09:14:23.567801  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (687.459µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.569251  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.092858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.569428  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:14:23.570313  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (684.558µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.571634  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (987.051µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.571825  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 09:14:23.572626  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (628.208µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.574182  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.146032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.574408  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 09:14:23.575230  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (654.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.576949  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.577146  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 09:14:23.578037  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (740.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.579474  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.08643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.579747  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 09:14:23.580667  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (744.898µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.582351  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.185889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.582624  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 09:14:23.583540  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (649.283µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.585160  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.214056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.585444  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:14:23.586294  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (625.21µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.587837  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.125268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.588052  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 09:14:23.588874  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (615.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.590580  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.590939  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:14:23.592026  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (765.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.594294  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.720485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.594555  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 09:14:23.595543  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (788.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.597555  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.597879  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:14:23.598763  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (692.337µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.600238  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.166982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.600423  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:14:23.601354  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (739.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.602880  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.190662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.603122  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:14:23.604124  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (776.715µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.605971  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.231208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.606293  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:14:23.607251  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (731.705µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.608536  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.608616  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.608792  108216 httplog.go:90] GET /healthz: (995.354µs) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:23.608952  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.306368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.609143  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:14:23.610070  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (750.954µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.611804  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.612060  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:14:23.613132  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (815.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.615014  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.615292  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:14:23.616382  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (822.283µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.618078  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.328804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.618307  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:14:23.619321  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (813.864µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.619529  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.619554  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.619581  108216 httplog.go:90] GET /healthz: (754.651µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.621289  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.621546  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:14:23.622468  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (734.922µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.623946  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.0644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.624247  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:14:23.625064  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (627.819µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.626914  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.510685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.627119  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:14:23.628077  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (741.135µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.629969  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.630236  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:14:23.631144  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (769.393µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.632872  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.232039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.633108  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:14:23.633990  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (713.037µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.635901  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.414352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.636131  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:14:23.637021  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (720.321µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.638858  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.639175  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:14:23.640318  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (858.43µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.642536  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.709879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.642772  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:14:23.643865  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (842.652µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.645831  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.646108  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:14:23.647128  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (770.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.649222  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.649558  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:14:23.650334  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (610.325µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.652151  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.430184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.652305  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:14:23.653292  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (815.643µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.654843  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.654979  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:14:23.655846  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (690.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.659603  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.384953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.659780  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:14:23.660653  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (713.704µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.662377  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.662714  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:14:23.667977  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (877.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.690235  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.986317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.690790  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:14:23.708488  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.246826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.709355  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.709384  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.709412  108216 httplog.go:90] GET /healthz: (972.397µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.720016  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.720201  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.720373  108216 httplog.go:90] GET /healthz: (1.610852ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.732270  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.732650  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:14:23.748539  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.290497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.769322  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.105476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.769749  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:14:23.788567  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.788838  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.788696  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.40743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.788884  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.789024  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.789154  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.789163  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.809263  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.895729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.809278  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.809301  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.809342  108216 httplog.go:90] GET /healthz: (1.384759ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:23.809493  108216 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:14:23.819785  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.819938  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.820082  108216 httplog.go:90] GET /healthz: (1.201226ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.830047  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.827194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.849465  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.231353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.849752  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 09:14:23.868634  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.394673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.888950  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.763283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.889297  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 09:14:23.908534  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.251773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:23.908834  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.908901  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.909004  108216 httplog.go:90] GET /healthz: (1.083756ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:23.919869  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:23.920024  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:23.920130  108216 httplog.go:90] GET /healthz: (1.285204ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.930906  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.911686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.931181  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 09:14:23.932312  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.932760  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.934757  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.934787  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.934922  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.938323  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:23.948670  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.294696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.969501  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.165229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.969798  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:14:23.988666  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.242283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:23.992384  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.009421  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.009612  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.009834  108216 httplog.go:90] GET /healthz: (1.923185ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:24.009735  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.498712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.010130  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 09:14:24.019889  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.019913  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.019996  108216 httplog.go:90] GET /healthz: (1.170789ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.028509  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.233494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.049543  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.235274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.049850  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:14:24.068210  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (991.772µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.089132  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.934043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.089454  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 09:14:24.109083  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.109119  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.243454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.109130  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.109165  108216 httplog.go:90] GET /healthz: (1.276869ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:24.119580  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.119610  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.119682  108216 httplog.go:90] GET /healthz: (935.301µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.128959  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.75632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.129143  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:14:24.129346  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.148923  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.644386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.169448  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.113613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.169748  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:14:24.188913  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.580361ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.209401  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.087055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.209666  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 09:14:24.209704  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.209722  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.209756  108216 httplog.go:90] GET /healthz: (1.259699ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:24.220136  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.220272  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.220411  108216 httplog.go:90] GET /healthz: (1.554336ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.228754  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.446738ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.250620  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.393146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.251058  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:14:24.268408  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.204943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.289187  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.958532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.289616  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:14:24.309089  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.829593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.309611  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.309873  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.310037  108216 httplog.go:90] GET /healthz: (1.219499ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:24.320156  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.320218  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.320283  108216 httplog.go:90] GET /healthz: (1.492078ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.326627  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.326788  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.326789  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.326795  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.327778  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.328740  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.328778  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.329087  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.940781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.329340  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:14:24.348539  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.222922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.371370  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.135065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.371754  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:14:24.388530  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.210108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.409046  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.409075  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.409114  108216 httplog.go:90] GET /healthz: (1.251652ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:24.409511  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.219082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.409806  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:14:24.420009  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.420045  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.420094  108216 httplog.go:90] GET /healthz: (1.162046ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.428255  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.047692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.449371  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.094575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.449739  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:14:24.468691  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.387695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.489547  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.248047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.489915  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:14:24.508534  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.28124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.509278  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.509307  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.509342  108216 httplog.go:90] GET /healthz: (1.358902ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:24.519696  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.519725  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.519773  108216 httplog.go:90] GET /healthz: (1.014865ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.528995  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.739089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.529243  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:14:24.549535  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.367631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.570738  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.518499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.571080  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:14:24.588531  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.207525ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.609373  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.041737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.609622  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.609761  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.609930  108216 httplog.go:90] GET /healthz: (2.026488ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:24.609681  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:14:24.620992  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.621022  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.621070  108216 httplog.go:90] GET /healthz: (1.115557ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.628550  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.327997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.650063  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.649033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.650331  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:14:24.668766  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.375287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.689852  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.551145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.690216  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:14:24.708727  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.708765  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.708801  108216 httplog.go:90] GET /healthz: (906.074µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:24.708814  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.502606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.720001  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.720132  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.720230  108216 httplog.go:90] GET /healthz: (1.394457ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.728918  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.703375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.729210  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:14:24.748581  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.298448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.769115  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.850961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.769479  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:14:24.788426  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.200391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.789002  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.789017  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.789020  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.789183  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.789303  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.789325  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.808959  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.809000  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.809040  108216 httplog.go:90] GET /healthz: (1.153884ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:24.809492  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.184784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.809767  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:14:24.819696  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.819730  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.819761  108216 httplog.go:90] GET /healthz: (968.193µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.828615  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.444901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.849156  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.893277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.849417  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:14:24.868449  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.15902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.889202  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.889594  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:14:24.908241  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.02712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:24.909083  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.909203  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.909245  108216 httplog.go:90] GET /healthz: (1.262184ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:24.919754  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:24.919790  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:24.919838  108216 httplog.go:90] GET /healthz: (1.015182ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.929258  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.929669  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:14:24.932468  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.932948  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.934966  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.935067  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.935082  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.938582  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:24.948636  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.344429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.969442  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.105426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.969845  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:14:24.988698  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.411608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:24.992867  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.008937  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.008985  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.009022  108216 httplog.go:90] GET /healthz: (1.157805ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:25.009222  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.931388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.009500  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:14:25.019779  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.019809  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.019846  108216 httplog.go:90] GET /healthz: (1.092845ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.028417  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.121994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.049516  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.2429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.049849  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:14:25.068390  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.117015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.090338  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.162873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.090658  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:14:25.108528  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.309226ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.108873  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.108899  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.108944  108216 httplog.go:90] GET /healthz: (1.033674ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:25.119942  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.119969  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.120055  108216 httplog.go:90] GET /healthz: (1.180961ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.129149  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.949471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.129398  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:14:25.129580  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.148756  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.507291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.169992  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.650867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.170287  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:14:25.188479  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.229953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.208883  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.208915  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.208950  108216 httplog.go:90] GET /healthz: (1.161096ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:25.209327  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.10555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.209542  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:14:25.219937  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.219967  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.220011  108216 httplog.go:90] GET /healthz: (1.154468ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.229223  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.345909ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.249248  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.997479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.249467  108216 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:14:25.268564  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.318203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.270403  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.265495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.290060  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.719069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.290310  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 09:14:25.308593  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.30312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.308804  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.308831  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.308865  108216 httplog.go:90] GET /healthz: (941.11µs) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:25.310549  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.169522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.319784  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.319815  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.319853  108216 httplog.go:90] GET /healthz: (1.091799ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.326922  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.326990  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.327036  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.327137  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.327939  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.328887  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.328999  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.329159  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.870128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.329388  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:14:25.348674  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.344096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.350563  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.42387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.369319  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.079574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.369523  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:14:25.388820  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.311396ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.390826  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.456499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.408884  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.408925  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.408964  108216 httplog.go:90] GET /healthz: (1.072601ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:25.409208  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.987056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.409404  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:14:25.419784  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.419819  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.419863  108216 httplog.go:90] GET /healthz: (1.027534ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.428383  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.17479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.430269  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.396923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.449106  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.891924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.449434  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:14:25.468618  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.241563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.470379  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.296985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.489292  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.961531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.489595  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:14:25.508512  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.246602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.508690  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.508742  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.508829  108216 httplog.go:90] GET /healthz: (966.361µs) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:25.510227  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.169803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.519676  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.519706  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.519747  108216 httplog.go:90] GET /healthz: (930.923µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.531540  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (4.372932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.531830  108216 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:14:25.548833  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.484839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.550848  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.385319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.569442  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.093045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.569862  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 09:14:25.587813  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.624474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:25.588500  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.094779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.589476  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.069199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:25.590113  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.154815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.590841  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (915.156µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:25.608937  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.608981  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.609038  108216 httplog.go:90] GET /healthz: (1.13077ms) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:25.609745  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.403508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.610135  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:14:25.620238  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.620453  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.620660  108216 httplog.go:90] GET /healthz: (1.721076ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.628604  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.344831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.630591  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.410254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.649321  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.091228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.649637  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:14:25.668593  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.310349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.670438  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.282557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.689346  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.066212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.689689  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:14:25.708574  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.336831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.708730  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.708750  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.708781  108216 httplog.go:90] GET /healthz: (914.842µs) 0 [Go-http-client/1.1 127.0.0.1:51250]
I0919 09:14:25.710238  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.122116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.719508  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.719533  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.719561  108216 httplog.go:90] GET /healthz: (828.726µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.728814  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.635361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.729098  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:14:25.748556  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.328109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.750279  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.177172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.769414  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.098117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.769718  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:14:25.788417  108216 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.210494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.789236  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.789274  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.789295  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.789337  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.789580  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.789759  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.790126  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.288282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.809240  108216 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:14:25.809379  108216 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:14:25.809383  108216 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.142455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:25.809521  108216 httplog.go:90] GET /healthz: (1.671205ms) 0 [Go-http-client/1.1 127.0.0.1:51244]
I0919 09:14:25.809883  108216 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:14:25.819554  108216 httplog.go:90] GET /healthz: (796.232µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.821016  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.046122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.822852  108216 httplog.go:90] POST /api/v1/namespaces: (1.450223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.824110  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (770.908µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.827431  108216 httplog.go:90] POST /api/v1/namespaces/default/services: (2.934309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.828621  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (826.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.830172  108216 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.207882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.909162  108216 httplog.go:90] GET /healthz: (1.131844ms) 200 [Go-http-client/1.1 127.0.0.1:51244]
W0919 09:14:25.910313  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.910473  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.910574  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.910667  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.910790  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.910889  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.910991  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.911081  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.911156  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.911212  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.911280  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:25.911381  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:14:25.911470  108216 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 09:14:25.911535  108216 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:14:25.911795  108216 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 09:14:25.912055  108216 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:14:25.912082  108216 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:14:25.913016  108216 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (620.423µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51244]
I0919 09:14:25.913899  108216 get.go:251] Starting watch for /api/v1/pods, rv=59892 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m12s
I0919 09:14:25.932727  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.933281  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.935166  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.935226  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.935270  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.938741  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:25.993082  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.012072  108216 shared_informer.go:227] caches populated
I0919 09:14:26.012105  108216 shared_informer.go:204] Caches are synced for scheduler 
I0919 09:14:26.012630  108216 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012676  108216 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012680  108216 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012691  108216 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012691  108216 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012701  108216 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012721  108216 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012708  108216 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013159  108216 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013255  108216 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013188  108216 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013368  108216 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012906  108216 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013584  108216 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013006  108216 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013795  108216 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013904  108216 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (772.047µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:26.013972  108216 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (583.27µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51258]
I0919 09:14:26.013004  108216 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.013999  108216 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.012632  108216 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.014176  108216 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.014307  108216 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (886.006µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51256]
I0919 09:14:26.013980  108216 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (572.282µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51254]
I0919 09:14:26.014383  108216 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (442.514µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51264]
I0919 09:14:26.014510  108216 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (613.334µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51262]
I0919 09:14:26.014604  108216 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (343.227µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51250]
I0919 09:14:26.014818  108216 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59892 labels= fields= timeout=8m6s
I0919 09:14:26.014917  108216 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (709.998µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51266]
I0919 09:14:26.015195  108216 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59892 labels= fields= timeout=6m8s
I0919 09:14:26.015238  108216 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59892 labels= fields= timeout=8m23s
I0919 09:14:26.015282  108216 get.go:251] Starting watch for /api/v1/nodes, rv=59892 labels= fields= timeout=9m0s
I0919 09:14:26.015512  108216 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59892 labels= fields= timeout=5m3s
I0919 09:14:26.015576  108216 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59892 labels= fields= timeout=8m0s
I0919 09:14:26.015592  108216 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59892 labels= fields= timeout=9m32s
I0919 09:14:26.015837  108216 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59892 labels= fields= timeout=7m43s
I0919 09:14:26.015871  108216 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (1.234017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51268]
I0919 09:14:26.015910  108216 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (897.336µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51260]
I0919 09:14:26.016411  108216 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59892 labels= fields= timeout=7m26s
I0919 09:14:26.016455  108216 get.go:251] Starting watch for /api/v1/services, rv=60006 labels= fields= timeout=8m13s
I0919 09:14:26.112339  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112378  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112384  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112388  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112393  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112397  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112401  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112406  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112409  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112446  108216 shared_informer.go:227] caches populated
I0919 09:14:26.112456  108216 shared_informer.go:227] caches populated
I0919 09:14:26.115044  108216 httplog.go:90] POST /api/v1/namespaces: (1.971406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51274]
I0919 09:14:26.115370  108216 node_lifecycle_controller.go:327] Sending events to api server.
I0919 09:14:26.115437  108216 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 09:14:26.115475  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:14:26.115567  108216 taint_manager.go:162] Sending events to api server.
I0919 09:14:26.115663  108216 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 09:14:26.115697  108216 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 09:14:26.115715  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:14:26.115736  108216 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:14:26.115780  108216 node_lifecycle_controller.go:488] Starting node controller
I0919 09:14:26.115807  108216 shared_informer.go:197] Waiting for caches to sync for taint
I0919 09:14:26.115975  108216 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.116004  108216 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.116830  108216 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (576.862µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51274]
I0919 09:14:26.117693  108216 get.go:251] Starting watch for /api/v1/namespaces, rv=60008 labels= fields= timeout=8m59s
I0919 09:14:26.129771  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.215945  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216020  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216025  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216030  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216034  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216039  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216227  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216257  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216264  108216 shared_informer.go:227] caches populated
I0919 09:14:26.216280  108216 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.216291  108216 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.216307  108216 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.216310  108216 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.216367  108216 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.216623  108216 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 09:14:26.217553  108216 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (486.017µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51278]
I0919 09:14:26.217596  108216 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (475.396µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51282]
I0919 09:14:26.217665  108216 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (542.664µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51276]
I0919 09:14:26.218114  108216 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59892 labels= fields= timeout=7m57s
I0919 09:14:26.218554  108216 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59892 labels= fields= timeout=8m14s
I0919 09:14:26.218563  108216 get.go:251] Starting watch for /api/v1/pods, rv=59892 labels= fields= timeout=7m26s
I0919 09:14:26.241944  108216 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 09:14:26.241982  108216 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 09:14:26.242015  108216 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 09:14:26.242019  108216 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 09:14:26.242040  108216 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 09:14:26.242044  108216 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 09:14:26.242146  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"c8b0a986-cd21-444f-9852-49fb87b96f39", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 09:14:26.242191  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"a517097b-7b97-45af-9506-3f0cf2451093", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 09:14:26.242207  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"d820b366-63b0-4d19-a903-f48cdc52e1e4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 09:14:26.244599  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (2.112303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:26.246620  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (1.532956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:26.248474  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (1.397843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:26.315991  108216 shared_informer.go:227] caches populated
I0919 09:14:26.316027  108216 shared_informer.go:204] Caches are synced for taint 
I0919 09:14:26.316097  108216 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 09:14:26.316518  108216 shared_informer.go:227] caches populated
I0919 09:14:26.316553  108216 shared_informer.go:227] caches populated
I0919 09:14:26.316560  108216 shared_informer.go:227] caches populated
I0919 09:14:26.316564  108216 shared_informer.go:227] caches populated
I0919 09:14:26.316569  108216 shared_informer.go:227] caches populated
I0919 09:14:26.319366  108216 httplog.go:90] POST /api/v1/nodes: (2.13364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.319775  108216 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 09:14:26.319823  108216 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 09:14:26.319840  108216 taint_manager.go:438] Updating known taints on node node-0: []
I0919 09:14:26.321369  108216 httplog.go:90] POST /api/v1/nodes: (1.46522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.321686  108216 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 09:14:26.321793  108216 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:14:26.321821  108216 taint_manager.go:438] Updating known taints on node node-1: []
I0919 09:14:26.323087  108216 httplog.go:90] POST /api/v1/nodes: (1.291919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.323568  108216 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 09:14:26.323581  108216 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 09:14:26.323599  108216 taint_manager.go:438] Updating known taints on node node-2: []
I0919 09:14:26.325012  108216 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/pods: (1.496283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.325390  108216 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630", Name:"testpod-2"}
I0919 09:14:26.325507  108216 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2
I0919 09:14:26.325522  108216 scheduler.go:530] Attempting to schedule pod: taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2
I0919 09:14:26.325836  108216 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2", node "node-0"
I0919 09:14:26.325855  108216 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2", node "node-0": all PVCs bound and nothing to do
I0919 09:14:26.325901  108216 factory.go:606] Attempting to bind testpod-2 to node-0
I0919 09:14:26.327328  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.327333  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.327355  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.327341  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.327571  108216 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/pods/testpod-2/binding: (1.445336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.327779  108216 scheduler.go:662] pod taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2 is bound successfully on node "node-0", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 09:14:26.327977  108216 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630", Name:"testpod-2"}
I0919 09:14:26.328103  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.329140  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.329149  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.329356  108216 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/events: (1.331106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.427408  108216 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/pods/testpod-2: (1.714076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.429147  108216 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/pods/testpod-2: (1.228926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.430694  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.011606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.432950  108216 httplog.go:90] PUT /api/v1/nodes/node-0/status: (1.792542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.434026  108216 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (388.268µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.436877  108216 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.108748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.437167  108216 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:14:26.433460738 +0000 UTC m=+333.100171651,}] Taint to Node node-0
I0919 09:14:26.437198  108216 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 09:14:26.535176  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.556192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.635359  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.714378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.735348  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.763118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.789433  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.789438  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.789460  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.789740  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.789841  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.789937  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.835176  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.625985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.932934  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.933422  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.934983  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.456789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:26.935407  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.935482  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.935488  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.938966  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:26.993431  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.014780  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.014975  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.015070  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.015451  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.015700  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.016292  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.034956  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.39228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.129998  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.135347  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.690785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.218233  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.235235  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.542801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.327529  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.327574  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.327550  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.327555  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.328266  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.329296  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.329305  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.335144  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.611181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.435466  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.855488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.535691  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.051319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.635392  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.662192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.735229  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.546043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.789785  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.789864  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.789868  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.790110  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.789888  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.790159  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.835189  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.63305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.933130  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.933603  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.935316  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.544465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:27.935595  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.935675  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.935690  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.939149  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:27.993750  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.015096  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.015229  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.015265  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.015634  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.015832  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.016490  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.035349  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.716371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.130202  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.135196  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.589573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.218409  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.235034  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.476292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.327719  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.327720  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.327732  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.328034  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.328430  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.329529  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.329541  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.335158  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.535922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.435170  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.587109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.535203  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.503976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.635121  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.431365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.735192  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.588202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.790176  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.790189  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.790252  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.790260  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.790300  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.790518  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.835512  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.833369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.933341  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.934181  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.935631  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.994592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:28.935871  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.935894  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.935874  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.939346  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:28.994060  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.015304  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.015381  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.015548  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.015817  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.016054  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.016717  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.035466  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.733288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.130374  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.135454  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.717414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.214978  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.803545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:29.216958  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.436566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:29.218575  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.218576  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.155059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:29.235375  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.659911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.327944  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.327944  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.327953  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.328167  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.328705  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.329762  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.329763  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.335374  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.686129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.435288  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.724788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.535399  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.789487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.635369  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.676107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.735247  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.666294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.790380  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.790371  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.790407  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.790407  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.790691  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.790463  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.835305  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.693026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.933509  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.934384  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.935142  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.563143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:29.936054  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.936149  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.936155  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.939555  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:29.994248  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.015517  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.015564  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.015734  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.015948  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.016179  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.016980  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.035574  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.91772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.130751  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.135116  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.467348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.218765  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.235320  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.729111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.328210  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.328210  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.328223  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.328368  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.328990  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.330043  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.330056  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.335437  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.869891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.435081  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.496143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.535119  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.511307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.635232  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.62051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.725124  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.359087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:30.726756  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.191784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:30.728133  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (998.458µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:30.734947  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.434486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.790770  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.790868  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.790820  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.790826  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.790853  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.791006  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.835299  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.715706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.933676  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.934591  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.935293  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.72498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:30.936237  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.936447  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.936461  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.939773  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:30.994457  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.015692  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.015753  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.016037  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.016085  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.016400  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.017138  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.034911  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.305615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.130923  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.135145  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.559386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.218965  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.235405  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.718739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.316236  108216 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 09:14:31.316275  108216 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 09:14:31.316346  108216 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:�:zone1
I0919 09:14:31.316366  108216 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 09:14:31.316373  108216 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 09:14:31.316385  108216 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 09:14:31.316391  108216 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0919 09:14:31.316429  108216 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0919 09:14:31.316469  108216 node_lifecycle_controller.go:770] Node node-0 is NotReady as of 2019-09-19 09:14:31.316452008 +0000 UTC m=+337.983162921. Adding it to the Taint queue.
W0919 09:14:31.316508  108216 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0919 09:14:31.316534  108216 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 09:14:31.316519  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"8d25e0e1-8988-4393-9e7a-2d05d02b6dc3", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 09:14:31.316565  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"a95af2ef-5ad4-44e7-b079-6e08052e8cc0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 09:14:31.316566  108216 node_lifecycle_controller.go:1144] Controller detected that zone region1:�:zone1 is now in state Normal.
I0919 09:14:31.316572  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"8b8a6aa4-3e00-42eb-b48d-bd6944af6f35", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0919 09:14:31.321408  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (2.369274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.323541  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (1.531574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.323589  108216 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (497.012µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:31.325308  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (1.325122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:31.326203  108216 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.931431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.326450  108216 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 09:14:31.322665549 +0000 UTC m=+337.989376454,}] Taint to Node node-0
I0919 09:14:31.326488  108216 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 09:14:31.326604  108216 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 09:14:31.326624  108216 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 09:14:31 +0000 UTC}]
I0919 09:14:31.326688  108216 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2 at 2019-09-19 09:14:31.326680891 +0000 UTC m=+337.993391817 to be fired at 2019-09-19 09:14:31.326680891 +0000 UTC m=+337.993391817
I0919 09:14:31.326715  108216 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2
I0919 09:14:31.326883  108216 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2
I0919 09:14:31.328852  108216 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/events: (1.679986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:31.329047  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.329070  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.329100  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.329120  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.329240  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.329413  108216 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/pods/testpod-2: (2.260853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.330136  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.330332  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.334330  108216 httplog.go:90] GET /api/v1/nodes/node-0: (864.289µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.435119  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.596166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.535507  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.827725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.635243  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.524935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.735123  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.516542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.791039  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.791068  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.791041  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.791099  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.791160  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.791340  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.834967  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.363819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.933845  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.934908  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.935370  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.627155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:31.936341  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.936711  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.936716  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.939957  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:31.994625  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.015882  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.015882  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.016233  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.016284  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.016568  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.017266  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.035065  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.561248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.131098  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.135252  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.672028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.219825  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.235421  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.683157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.329242  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.329240  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.329264  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.329334  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.329361  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.330289  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.330499  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.335320  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.755035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.435305  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.712792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.537281  108216 httplog.go:90] GET /api/v1/nodes/node-0: (3.691155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.635223  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.658193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.734967  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.429467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.791237  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.791237  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.791237  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.791251  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.791262  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.791539  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.835006  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.420996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.934006  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.935161  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.935438  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.819728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:32.936601  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.936872  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.936895  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.940159  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:32.994840  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.016013  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.016026  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.016432  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.016441  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.016794  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.017405  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.035078  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.525187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.131260  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.135183  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.658973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.220035  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.235267  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.585373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.329436  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.329449  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.329460  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.329473  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.329473  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.330455  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.330658  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.335316  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.594781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.434998  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.482715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.535450  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.821414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.635239  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.617545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.735141  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.554482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.791525  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.791525  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.791596  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.791605  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.791686  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.791700  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.834944  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.3991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.934597  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.935321  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.935350  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.785626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:33.936790  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.937022  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.937051  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.940295  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:33.995028  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.016224  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.016468  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.016605  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.016614  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.016936  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.017705  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.035117  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.518072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.131450  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.135598  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.98457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.220209  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.235271  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.623833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.329768  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.329872  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.329870  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.329888  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.329911  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.330685  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.330842  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.335693  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.909997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.435302  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.640758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.535730  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.101425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.635909  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.136981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.735708  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.937659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.791735  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.791785  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.791782  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.791791  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.791919  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.792141  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.835397  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.783317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.934829  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.935209  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.506068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:34.935615  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.937083  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.937256  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.937263  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.940464  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:34.995382  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.016379  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.016703  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.016857  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.016857  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.017007  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.017856  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.036836  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.083745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.131631  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.135227  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.570829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.220433  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.235489  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.749554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.330029  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.330069  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.330093  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.330094  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.330111  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.330857  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.331000  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.335278  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.686613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.435142  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.573006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.535407  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.818595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.587559  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.279641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:35.589197  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.156981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:35.590591  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (966.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:35.636069  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.503895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.735057  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.514983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.791945  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.792087  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.791945  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.791965  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.791970  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.792365  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.821906  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.633413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.823921  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.477807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.825411  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.078971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.835449  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.845258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.935019  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.935449  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.757121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:35.935767  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.937276  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.937414  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.937425  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.940673  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:35.995579  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.016501  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.016860  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.017019  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.017088  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.017141  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.018010  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.035285  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.686007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:36.131840  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.135244  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.612393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:36.220625  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.235123  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.531155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:36.316803  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.00035559s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 09:14:36.316858  108216 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 09:14:36.316868  108216 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 09:14:36.316904  108216 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 09:14:36.319866  108216 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.520124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:36.320297  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.003777029s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 09:14:36.320347  108216 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 09:14:36.320358  108216 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 09:14:36.320366  108216 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 09:14:36.321019  108216 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (473.407µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:36.322347  108216 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.745929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:36.322687  108216 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0919 09:14:36.322717  108216 controller_utils.go:124] Update ready status of pods on node [node-1]
I0919 09:14:36.322902  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"a95af2ef-5ad4-44e7-b079-6e08052e8cc0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0919 09:14:36.323622  108216 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (383.751µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0919 09:14:36.324014  108216 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.117299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:36.324120  108216 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.21137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0919 09:14:36.324246  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.00769393s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 09:14:36.324289  108216 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 09:14:36.324301  108216 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 09:14:36.324310  108216 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 09:14:36.324504  108216 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:14:36.320299205 +0000 UTC m=+342.987010127,}] Taint to Node node-0
I0919 09:14:36.325023  108216 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (300.624µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0919 09:14:36.326287  108216 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.819473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:36.326502  108216 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:14:36.322857302 +0000 UTC m=+342.989568226,}] Taint to Node node-1
I0919 09:14:36.326539  108216 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 09:14:36.327471  108216 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.144275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0919 09:14:36.327732  108216 controller_utils.go:180] Recording status change NodeNotReady event message for node node-2
I0919 09:14:36.327740  108216 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.85387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51298]
I0919 09:14:36.327755  108216 controller_utils.go:124] Update ready status of pods on node [node-2]
I0919 09:14:36.327958  108216 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"8b8a6aa4-3e00-42eb-b48d-bd6944af6f35", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I0919 09:14:36.327982  108216 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:14:26 +0000 UTC,}] Taint
I0919 09:14:36.328021  108216 controller_utils.go:204] Added [] Taint to Node node-0
I0919 09:14:36.328172  108216 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (343.587µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0919 09:14:36.328516  108216 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (360.254µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51298]
I0919 09:14:36.328869  108216 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:14:26 +0000 UTC,}] Taint
I0919 09:14:36.329111  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (5.279007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.329255  108216 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (981.153µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51290]
I0919 09:14:36.329422  108216 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 09:14:36.329958  108216 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (392.779µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.330307  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.330331  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.330375  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.330396  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.330482  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.330571  108216 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.679582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51298]
I0919 09:14:36.330875  108216 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:14:36.327636206 +0000 UTC m=+342.994347137,}] Taint to Node node-2
I0919 09:14:36.330916  108216 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 09:14:36.331079  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.331149  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.331601  108216 httplog.go:90] POST /api/v1/namespaces/default/events: (1.179685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.332537  108216 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.814416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0919 09:14:36.332946  108216 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 09:14:36.332965  108216 taint_manager.go:438] Updating known taints on node node-0: []
I0919 09:14:36.332981  108216 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I0919 09:14:36.332992  108216 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions209ca6d9-1178-44da-9fa7-cf221d935630/testpod-2 at 2019-09-19 09:14:36.33298843 +0000 UTC m=+342.999699351
I0919 09:14:36.334353  108216 httplog.go:90] GET /api/v1/nodes/node-0: (908.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.435307  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.680892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.535157  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.563864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.635362  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.726055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.735320  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.729842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.792343  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.792343  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.792343  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.792397  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.792530  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.792538  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.835472  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.842165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.935175  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.935409  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.811647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:36.935981  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.937520  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.937544  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.937659  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.940921  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:36.995733  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.016697  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.017038  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.017178  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.017405  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.017414  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.018160  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.035408  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.706826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.132175  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.135345  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.642101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.220801  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.235495  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.783952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.330483  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.330483  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.330538  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.330541  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.330480  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.331237  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.331249  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.335294  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.688599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.435313  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.780069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.535122  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.549951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.635348  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.649332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.735277  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.64875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.792824  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.792939  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.792949  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.793108  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.793196  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.793342  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.835358  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.733166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.935251  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.619273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:37.935368  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.936157  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.937678  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.937850  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.937683  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.941067  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:37.995927  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.016901  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.017258  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.017313  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.017575  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.017581  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.018322  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.035376  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.802543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.132522  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.135308  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.609755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.221014  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.235578  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.957308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.330735  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.330740  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.330780  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.330800  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.330750  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.331422  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.331434  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.335822  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.188942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.435467  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.746916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.535228  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.510867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.635561  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.863064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.734969  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.379843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.793060  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.793066  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.793074  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.793345  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.793467  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.793551  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.835391  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.743053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.935523  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.935534  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.835049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:38.936324  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.938062  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.938089  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.938105  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.941273  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:38.996103  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.017190  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.017520  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.017524  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.017700  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.017746  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.018605  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.035299  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.69981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.132765  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.135532  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.845337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.214678  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.311022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:39.216340  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.096967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:39.217883  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (958.555µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:39.221204  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.235295  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.591266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.330938  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.330988  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.330938  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.330945  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.331035  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.331616  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.331683  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.335263  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.676859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.435193  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.665494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.535478  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.847119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.635217  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.562062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.735378  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.779106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.793266  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.793392  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.793462  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.793470  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.793586  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.793810  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.835497  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.791234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.935535  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.833553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:39.935722  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.936470  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.938231  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.938259  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.938266  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.941490  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:39.996370  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.017444  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.017740  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.017756  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.017896  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.017961  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.018782  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.035535  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.76727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.133065  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.135368  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.710643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.221396  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.235441  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.717914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.331175  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.331187  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.331196  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.331227  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.331187  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.331778  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.331843  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.335189  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.546217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.435296  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.689377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.535488  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.769528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.635391  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.675198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.725707  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.665487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:40.727520  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.287592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:40.729203  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.234947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:40.734833  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.309169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.793590  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.793626  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.793591  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.793731  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.793856  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.793955  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.835165  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.575588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.935347  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.744118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:40.935982  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.936661  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.938534  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.938718  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.938564  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.941815  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:40.996564  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.017894  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.017897  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.017877  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.018069  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.018072  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.018953  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.035564  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.863784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.133280  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.135408  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.71789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.221623  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.235560  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.852157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.331391  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.331391  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.331415  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.331419  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.331445  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.331945  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.331985  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.333026  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.016582562s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:14:41.333167  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.016723442s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.333291  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.016853775s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.333385  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.016946431s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.333536  108216 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 09:14:41.333494692 +0000 UTC m=+348.000205618. Adding it to the Taint queue.
I0919 09:14:41.333688  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.017172073s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:14:41.333793  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.017276554s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.333885  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.017368812s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.333977  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.017460354s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.334123  108216 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 09:14:41.334103017 +0000 UTC m=+348.000813944. Adding it to the Taint queue.
I0919 09:14:41.334235  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.017685804s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:14:41.334356  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.017807083s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.334466  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.017916822s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.334559  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.018009566s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:41.334727  108216 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 09:14:41.33470681 +0000 UTC m=+348.001417776. Adding it to the Taint queue.
I0919 09:14:41.335395  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.829556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.435266  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.692475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.535489  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.804334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.635322  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.686145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.735410  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.753848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.793814  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.793864  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.793814  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.793824  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.794039  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.794129  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.835562  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.867325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.935558  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.848072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:41.936155  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.936881  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.938795  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.938919  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.938940  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.942005  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:41.996752  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.018149  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.018143  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.018147  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.018225  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.018288  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.019135  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.035229  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.58852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.133467  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.135593  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.845333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.222007  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.235516  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.886721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.331715  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.331869  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.331893  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.331899  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.331868  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.332162  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.332185  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.335330  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.714604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.435691  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.108432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.535665  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.965011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.635275  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.642015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.735436  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.810736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.794119  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.794180  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.794212  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.794256  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.794127  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.794125  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.835476  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.841111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.935439  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.821948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:42.936323  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.937051  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.939004  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.939024  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.939034  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.942235  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:42.997006  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.018550  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.018574  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.018624  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.018660  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.018827  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.019359  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.035326  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.723488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.133722  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.135535  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.935141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.222180  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.235193  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.606914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.331887  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.332051  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.332058  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.332070  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.332058  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.332296  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.332314  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.335401  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.776573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.436114  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.576661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.535175  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.533533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.635482  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.765088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.735441  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.799711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.794349  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.794400  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.794442  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.794472  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.794625  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.794668  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.835561  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.980858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.935455  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.753549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:43.936494  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.937238  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.939190  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.939206  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.939194  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.942428  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:43.997186  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.018780  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.018973  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.018987  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.018995  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.019052  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.019504  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.035280  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.706223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.133892  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.135605  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.90266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.222384  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.236928  108216 httplog.go:90] GET /api/v1/nodes/node-0: (3.333481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.332072  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.332246  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.332259  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.332266  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.332356  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.332446  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.332560  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.335272  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.676016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.435200  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.65861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.536161  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.531478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.635344  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.754387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.735341  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.830339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.794535  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.794535  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.794661  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.794752  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.794798  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.794816  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.835124  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.549583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.935131  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.511752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:44.936704  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.937416  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.939410  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.939448  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.939453  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.942596  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:44.997386  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.019121  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.019270  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.019306  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.019347  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.019457  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.019684  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.035238  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.626437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.134061  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.135517  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.8217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.222559  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.235299  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.761087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.332236  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.332402  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.332414  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.332568  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.332595  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.332595  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.332712  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.335363  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.763926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.435359  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.757151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.535448  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.820339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.587941  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.526094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:45.589884  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.517181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:45.591598  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.144721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44250]
I0919 09:14:45.635365  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.780704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.736350  108216 httplog.go:90] GET /api/v1/nodes/node-0: (2.014548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.794724  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.794724  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.794836  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.794985  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.794989  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.795099  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.822825  108216 httplog.go:90] GET /api/v1/namespaces/default: (2.491863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.824619  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.363509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.826181  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.036278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.836263  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.479774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.935350  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.722447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:45.936868  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.937574  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.939625  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.939625  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.939686  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.942752  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:45.997594  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.019288  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.019434  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.019450  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.019461  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.019621  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.019901  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.035181  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.588297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.134216  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.135176  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.565153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.222769  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.235335  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.751299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.332422  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.332590  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.332594  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.332682  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.332720  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.332746  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.332895  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.335022  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.018502701s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:14:46.335067  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.018552737s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335079  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.018565387s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335090  108216 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.018576012s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335169  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.018621646s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:14:46.335187  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.018640451s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335206  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.018657011s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335221  108216 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.018673552s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335229  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.539827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.335273  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.018837001s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:14:46.335377  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.018938412s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335406  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.018968716s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.335423  108216 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.018986312s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:14:26 +0000 UTC,LastTransitionTime:2019-09-19 09:14:36 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:14:46.435301  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.735477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.535543  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.960764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.635206  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.563855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.735573  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.944652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.794988  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.795008  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.795124  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.795058  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.795201  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.795344  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.835199  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.644118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.935276  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.648218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:46.937054  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.937722  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.939720  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.939820  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.939840  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.942917  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:46.997780  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.019822  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.019851  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.019947  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.019967  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.019981  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.020116  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.035118  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.502724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.134404  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.135630  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.90903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.222970  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.235404  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.842782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.332744  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.332744  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.332745  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.332824  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.332831  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.332905  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.333027  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.335164  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.561537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.433732  108216 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.515617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:47.435006  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.523714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.435298  108216 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.021104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:47.436582  108216 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (854.818µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52662]
I0919 09:14:47.535256  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.654079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.635617  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.982574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.735401  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.758438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.795323  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.795331  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.795545  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.795570  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.795588  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.795599  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.835236  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.617055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.935325  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.713282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:47.937246  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.937884  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.940061  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.940070  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.940081  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.943201  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:47.997959  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.019998  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.019996  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.020195  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.020214  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.020328  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.020340  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.035442  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.880858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.134817  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.135037  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.512827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.223187  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.235265  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.655235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.333131  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.333282  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.333306  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.333435  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.333571  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.333679  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.333836  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.335357  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.772745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.435453  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.856416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.535287  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.641467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.635067  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.486286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.735225  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.614057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.795508  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.795512  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.795673  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.795688  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.795714  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.795821  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.835179  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.589437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.935276  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.691469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:48.937412  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.938029  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.940197  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.940209  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.940219  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.943410  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:48.998143  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.020220  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.020227  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.020331  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.020330  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.020448  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.020451  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.035386  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.774874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:49.134989  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.135177  108216 httplog.go:90] GET /api/v1/nodes/node-0: (1.560685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0919 09:14:49.214806  108216 httplog.go:90] GET /api/v1/namespaces/default: (1.407592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:49.216404  108216 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.068523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:49.217819  108216 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (917.556µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0919 09:14:49.223362  108216 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:14:49.235085  108216 httplog.go:90] GET