Result: FAILURE
Tests: 1 failed / 1692 succeeded
Started: 2019-06-11 23:57
Elapsed: 28m26s
Revision:
Builder: gke-prow-containerd-pool-bigger-170c3937-xm4n
Pod: 839e1c1d-8ca4-11e9-9d6a-ca36d9701dd2
Resultstore: https://source.cloud.google.com/results/invocations/26afecf8-985d-4a6e-9a4e-39d6509d5e53/targets/test
Infra-commit: 19468be75
Repo: k8s.io/kubernetes
Repo-commit: a8b6cf287b0a73bcb3f7abf0eba9b857a2802239
Repos: k8s.io/kubernetes = master

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestUnreservePlugin (5.03s)

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestUnreservePlugin$
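TestUnreservePlugin exercises the scheduler framework's Unreserve extension point, which rolls back state for a pod that was reserved on a node but rejected by a later phase (for example, permit or bind). The sketch below shows the plugin shape such a test asserts on; it is a minimal, self-contained illustration, and the pod, unreservePlugin, and countingPlugin names are simplified stand-ins, not the real k8s.io/kubernetes scheduler framework types or signatures.

package main

import "fmt"

// pod is a hypothetical stand-in for *v1.Pod.
type pod struct{ name string }

// unreservePlugin mirrors the shape of the framework's Unreserve hook:
// it undoes whatever the reserve phase set aside when a later phase
// rejects the pod. (Illustrative interface, not the real framework API.)
type unreservePlugin interface {
	Name() string
	Unreserve(p *pod, nodeName string)
}

// countingPlugin records how often Unreserve fires, which is roughly the
// kind of bookkeeping an integration test like TestUnreservePlugin checks.
type countingPlugin struct{ calls int }

func (c *countingPlugin) Name() string { return "counting-unreserve" }

func (c *countingPlugin) Unreserve(p *pod, nodeName string) {
	c.calls++
	fmt.Printf("unreserved %s from %s (call %d)\n", p.name, nodeName, c.calls)
}

func main() {
	var plug unreservePlugin = &countingPlugin{}
	// Simulate a scheduling cycle whose permit phase rejected the pod,
	// so the framework invokes Unreserve to release the reservation.
	plug.Unreserve(&pod{name: "test-pod"}, "node-1")
}

The captured log from the failing run follows.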
I0612 00:17:03.121768  109237 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0612 00:17:03.121795  109237 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0612 00:17:03.121804  109237 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0612 00:17:03.121814  109237 master.go:233] Using reconciler: 
I0612 00:17:03.123510  109237 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.123625  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.123645  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.123687  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.123770  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.124065  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.124251  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.124350  109237 store.go:1343] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0612 00:17:03.124381  109237 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.124430  109237 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0612 00:17:03.124571  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.124592  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.124620  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.124715  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.125223  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.125300  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.125431  109237 store.go:1343] Monitoring events count at <storage-prefix>//events
I0612 00:17:03.125541  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.125606  109237 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.125787  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.125838  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.125881  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.125800  109237 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0612 00:17:03.126542  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.126888  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.126997  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.127078  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.127230  109237 store.go:1343] Monitoring limitranges count at <storage-prefix>//limitranges
I0612 00:17:03.127306  109237 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0612 00:17:03.127307  109237 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.127412  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.127486  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.127554  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.127634  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.127919  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.127974  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.128096  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.128263  109237 store.go:1343] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0612 00:17:03.128344  109237 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0612 00:17:03.128468  109237 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.128584  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.128601  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.128631  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.128670  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.128904  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.128979  109237 store.go:1343] Monitoring secrets count at <storage-prefix>//secrets
I0612 00:17:03.129065  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.129107  109237 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0612 00:17:03.129244  109237 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.129308  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.129334  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.129350  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.129385  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.129427  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.129641  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.129681  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.129766  109237 store.go:1343] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0612 00:17:03.129816  109237 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0612 00:17:03.130010  109237 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.130098  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.130116  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.130153  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.130233  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.130663  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.130687  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.131082  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.131131  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.131264  109237 store.go:1343] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0612 00:17:03.131302  109237 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0612 00:17:03.131463  109237 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.131622  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.131634  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.131655  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.131689  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.131964  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.132040  109237 store.go:1343] Monitoring configmaps count at <storage-prefix>//configmaps
I0612 00:17:03.132140  109237 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.132211  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.132221  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.132254  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.132291  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.132435  109237 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0612 00:17:03.132475  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.132784  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.132850  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.132983  109237 store.go:1343] Monitoring namespaces count at <storage-prefix>//namespaces
I0612 00:17:03.133047  109237 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0612 00:17:03.133208  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.133207  109237 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.133313  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.133329  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.133362  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.133438  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.133734  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.133839  109237 store.go:1343] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0612 00:17:03.133940  109237 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.133990  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.133997  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.134016  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.134045  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.134066  109237 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0612 00:17:03.134292  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.135179  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.135823  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.136045  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.136115  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.136599  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.137170  109237 store.go:1343] Monitoring nodes count at <storage-prefix>//minions
I0612 00:17:03.137222  109237 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0612 00:17:03.137464  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.137561  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.137582  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.137650  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.138283  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.139543  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.139946  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.140046  109237 store.go:1343] Monitoring pods count at <storage-prefix>//pods
I0612 00:17:03.140172  109237 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.140245  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.140254  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.140274  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.140320  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.140360  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.140420  109237 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0612 00:17:03.140646  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.140709  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.140747  109237 store.go:1343] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0612 00:17:03.140819  109237 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0612 00:17:03.140959  109237 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.141060  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.141117  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.141175  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.141266  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.141267  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.141505  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.141617  109237 store.go:1343] Monitoring services count at <storage-prefix>//services/specs
I0612 00:17:03.141620  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.141635  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.141643  109237 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.141681  109237 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0612 00:17:03.141716  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.141726  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.141760  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.141825  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.142106  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.142174  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.142207  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.142219  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.142241  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.142247  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.142301  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.143346  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.143407  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.143598  109237 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.143666  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.143681  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.143706  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.143925  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.144175  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.144295  109237 store.go:1343] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0612 00:17:03.144320  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.144384  109237 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0612 00:17:03.144739  109237 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.144918  109237 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.145227  109237 watch_cache.go:405] Replace watchCache (rev: 22389) 
I0612 00:17:03.145723  109237 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.146513  109237 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.147019  109237 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.147938  109237 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.148729  109237 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.148836  109237 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.150046  109237 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.151441  109237 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.151994  109237 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.152159  109237 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.152923  109237 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.153121  109237 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.153518  109237 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.153668  109237 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.154419  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.154640  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.154763  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.154859  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.154995  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.155109  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.155258  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.155916  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.156181  109237 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.156719  109237 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.157490  109237 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.157666  109237 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.157916  109237 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.158635  109237 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.158904  109237 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.159608  109237 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.160202  109237 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.160891  109237 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.161481  109237 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.161672  109237 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.161762  109237 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0612 00:17:03.161789  109237 master.go:425] Enabling API group "authentication.k8s.io".
I0612 00:17:03.161801  109237 master.go:425] Enabling API group "authorization.k8s.io".
I0612 00:17:03.161940  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.162040  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.162053  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.162174  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.162243  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.162667  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.162783  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.162926  109237 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0612 00:17:03.162972  109237 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0612 00:17:03.163085  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.163222  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.163259  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.163286  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.163353  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.163755  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.163911  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.164090  109237 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0612 00:17:03.164260  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.164316  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.164399  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.164358  109237 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0612 00:17:03.164609  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.164636  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.164894  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.165099  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.165401  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.165439  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.165531  109237 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0612 00:17:03.165548  109237 master.go:425] Enabling API group "autoscaling".
I0612 00:17:03.165552  109237 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0612 00:17:03.165702  109237 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.165773  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.165795  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.165822  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.165857  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.166113  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.166271  109237 store.go:1343] Monitoring jobs.batch count at <storage-prefix>//jobs
I0612 00:17:03.166273  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.166316  109237 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0612 00:17:03.166411  109237 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.166518  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.166536  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.166564  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.166625  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.166800  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.167060  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.167099  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.167179  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.167232  109237 store.go:1343] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0612 00:17:03.167250  109237 master.go:425] Enabling API group "batch".
I0612 00:17:03.167285  109237 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0612 00:17:03.167396  109237 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.167470  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.167487  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.167517  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.167630  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.167874  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.167956  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.168038  109237 store.go:1343] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0612 00:17:03.168060  109237 master.go:425] Enabling API group "certificates.k8s.io".
I0612 00:17:03.168069  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.168111  109237 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0612 00:17:03.168286  109237 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.168378  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.168389  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.168566  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.168608  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.168902  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.169049  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.169204  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.169278  109237 store.go:1343] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0612 00:17:03.169431  109237 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.169486  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.169493  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.169516  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.169542  109237 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0612 00:17:03.169599  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.170660  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.170903  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.170990  109237 store.go:1343] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0612 00:17:03.171016  109237 master.go:425] Enabling API group "coordination.k8s.io".
I0612 00:17:03.171142  109237 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.171213  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.171229  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.171268  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.171374  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.171414  109237 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0612 00:17:03.172594  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.173166  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.173386  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.173465  109237 store.go:1343] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0612 00:17:03.173596  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.173669  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.173682  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.173744  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.173777  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.173803  109237 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0612 00:17:03.173958  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.174338  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.174439  109237 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0612 00:17:03.174549  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.174604  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.174615  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.174642  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.174695  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.174737  109237 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0612 00:17:03.174878  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.175043  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.175147  109237 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0612 00:17:03.175323  109237 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.175405  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.175421  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.175439  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.175449  109237 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0612 00:17:03.175498  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.175640  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.175876  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.175947  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.176014  109237 store.go:1343] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0612 00:17:03.176053  109237 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0612 00:17:03.176170  109237 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.176279  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.176294  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.176325  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.176401  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.176604  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.176633  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.176733  109237 store.go:1343] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0612 00:17:03.176817  109237 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0612 00:17:03.177064  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.177167  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.177240  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.177305  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.177354  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.177619  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.177698  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.177752  109237 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0612 00:17:03.177797  109237 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0612 00:17:03.177940  109237 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.178017  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.178039  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.178078  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.178133  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.178660  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.178707  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.178779  109237 store.go:1343] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0612 00:17:03.178801  109237 master.go:425] Enabling API group "extensions".
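The reflector.go:160 lines repeated throughout this section are each cacher priming itself with a list followed by a watch. A hedged client-go sketch of that same primitive, using pods against a hypothetical kubeconfig path (in the test the apiserver runs in-process, so the setup here is illustrative only):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; in the test the apiserver runs in-process.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The same list-then-watch step reflector.go:160 logs for every resource.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0)

	stop := make(chan struct{})
	r.Run(stop) // lists once, then watches from the returned resourceVersion
}
```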
I0612 00:17:03.178825  109237 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0612 00:17:03.179016  109237 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.179833  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.179964  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.179991  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.180057  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.180067  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.180093  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.179456  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.180219  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.179572  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.179606  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.181525  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.181583  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.181667  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.181690  109237 store.go:1343] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0612 00:17:03.181762  109237 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0612 00:17:03.181863  109237 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.181931  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.181941  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.181967  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.182018  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.182822  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.182896  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.182967  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.183078  109237 store.go:1343] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0612 00:17:03.183099  109237 master.go:425] Enabling API group "networking.k8s.io".
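Every "Replace watchCache (rev: 22390)" line records a cache swapping in its initial list at etcd revision 22390. The client-go analogue of that step is cache.Store.Replace, shown in a small self-contained sketch (object names are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)

	// Replace atomically swaps in the initial list, tagged with the
	// resource version it was read at ("22390" in the lines above).
	objs := []interface{}{
		&v1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "example"}},
	}
	if err := store.Replace(objs, "22390"); err != nil {
		panic(err)
	}
	fmt.Println(store.ListKeys()) // [default/example]
}
```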
I0612 00:17:03.183147  109237 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.183404  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.183466  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.183498  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.183505  109237 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0612 00:17:03.183599  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.184237  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.184331  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.184369  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.184970  109237 store.go:1343] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0612 00:17:03.184996  109237 master.go:425] Enabling API group "node.k8s.io".
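The client.go:354 / ccResolverWrapper / balancerWrapper quadruplets are the etcd v3 client dialing over gRPC with an unregistered (default passthrough) scheme, one dial per store. A sketch of the equivalent direct dial, assuming the go.etcd.io/etcd/clientv3 version vendored at this commit; the key shape is illustrative only:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// One such dial happens per store above; every client pins the same
	// single endpoint, hence the repeated balancer/resolver lines.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// Illustrative key shape only; the real prefix is the test's random UUID.
	resp, err := cli.Get(ctx, "/aeff4417-71b1-4865-8b3f-cbbf56587f7a/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	fmt.Println("keys under prefix:", resp.Count)
}
```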
I0612 00:17:03.185044  109237 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0612 00:17:03.185131  109237 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.185741  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.185791  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.185791  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.185826  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.185970  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.186720  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.186796  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.186844  109237 store.go:1343] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0612 00:17:03.186910  109237 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0612 00:17:03.187389  109237 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.187470  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.187479  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.187504  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.187553  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.187783  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.187892  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.187980  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.188294  109237 store.go:1343] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0612 00:17:03.188320  109237 master.go:425] Enabling API group "policy".
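"storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal" means objects are pinned to one encode version on the way to etcd while the server decodes into its internal types. The internal types are not importable outside k/k, so this hedged sketch demonstrates only the encode half with the external type and a LegacyCodec:

```go
package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
)

func main() {
	scheme := runtime.NewScheme()
	if err := policyv1beta1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	codecs := serializer.NewCodecFactory(scheme)

	// "storing ... in policy/v1beta1": whatever version a caller used,
	// the object is encoded at this pinned version before hitting etcd.
	codec := codecs.LegacyCodec(policyv1beta1.SchemeGroupVersion)
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
	}
	out, err := runtime.Encode(codec, pdb)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // JSON with apiVersion: policy/v1beta1
}
```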
I0612 00:17:03.188346  109237 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0612 00:17:03.188349  109237 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.188408  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.188418  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.188446  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.188495  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.189146  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.204513  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.204662  109237 store.go:1343] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0612 00:17:03.204788  109237 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0612 00:17:03.204913  109237 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.205025  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.205039  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.205073  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.205135  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.205227  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.205483  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.205714  109237 store.go:1343] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0612 00:17:03.205753  109237 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.205835  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.205858  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.205917  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.205971  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.206032  109237 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0612 00:17:03.206213  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.206785  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.206924  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.206954  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.207038  109237 store.go:1343] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0612 00:17:03.207110  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.207212  109237 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0612 00:17:03.207217  109237 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.207274  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.207280  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.207300  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.207331  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.207613  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.207705  109237 store.go:1343] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0612 00:17:03.207967  109237 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.208014  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.208059  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.208060  109237 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0612 00:17:03.208072  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.208100  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.208102  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.208182  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.208666  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.208754  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.208767  109237 store.go:1343] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0612 00:17:03.208859  109237 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0612 00:17:03.209008  109237 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.209113  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.209137  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.209216  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.209662  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.209675  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.210292  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.210570  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.210603  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.210707  109237 store.go:1343] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0612 00:17:03.210764  109237 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.210780  109237 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0612 00:17:03.210844  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.210868  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.210892  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.210948  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.211162  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.211288  109237 store.go:1343] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0612 00:17:03.211309  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.211354  109237 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0612 00:17:03.211822  109237 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.211906  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.211905  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.211917  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.211945  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.212009  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.212262  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.212274  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.212358  109237 store.go:1343] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0612 00:17:03.212380  109237 master.go:425] Enabling API group "rbac.authorization.k8s.io".
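The "<storage-prefix>//roles" strings are a placeholder substitution (the real prefix is the test's random UUID), which is what produces the double slash; actual etcd keys append namespace and name under that prefix. A trivial sketch of the resulting key layout, with illustrative namespace and object names:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// Illustrative names; the real prefix is the test's random UUID and the
	// placeholder substitution is what produces the "//" in the log line.
	prefix := "/aeff4417-71b1-4865-8b3f-cbbf56587f7a"
	fmt.Println(path.Join(prefix, "roles", "default", "example"))
	// /aeff4417-71b1-4865-8b3f-cbbf56587f7a/roles/default/example
}
```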
I0612 00:17:03.212387  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.212473  109237 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0612 00:17:03.213428  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.214365  109237 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.214453  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.214469  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.214498  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.214783  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.215028  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.215156  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.215205  109237 store.go:1343] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0612 00:17:03.215237  109237 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0612 00:17:03.215341  109237 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.215412  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.215426  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.215454  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.215550  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.215803  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.215910  109237 store.go:1343] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0612 00:17:03.215935  109237 master.go:425] Enabling API group "scheduling.k8s.io".
I0612 00:17:03.216059  109237 master.go:417] Skipping disabled API group "settings.k8s.io".
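master.go:425/417 alternate between "Enabling" and "Skipping disabled" depending on whether any version of a group is on; settings.k8s.io is alpha and off by default in this release. A toy model only, not the apiserver's actual resource-config code:

```go
package main

import "fmt"

func main() {
	// Toy model only: a group is installed iff at least one of its
	// versions is enabled; settings.k8s.io is alpha and off by default.
	groups := []struct {
		name    string
		enabled bool
	}{
		{"scheduling.k8s.io", true},
		{"settings.k8s.io", false},
	}
	for _, g := range groups {
		if g.enabled {
			fmt.Printf("Enabling API group %q.\n", g.name)
		} else {
			fmt.Printf("Skipping disabled API group %q.\n", g.name)
		}
	}
}
```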
I0612 00:17:03.216086  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.216154  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.216317  109237 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0612 00:17:03.216506  109237 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.216601  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.216619  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.216648  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.216713  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.217046  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.217158  109237 store.go:1343] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0612 00:17:03.217176  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.217253  109237 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0612 00:17:03.217357  109237 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.217424  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.217442  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.217475  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.217546  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.218024  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.218121  109237 store.go:1343] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0612 00:17:03.218142  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.218178  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.218148  109237 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.218302  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.218321  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.218347  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.218453  109237 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0612 00:17:03.218459  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.219289  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.219681  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.220238  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.220363  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.220547  109237 store.go:1343] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0612 00:17:03.220577  109237 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.220599  109237 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0612 00:17:03.220627  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.220648  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.220674  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.221377  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.221606  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.222435  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.222490  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.222726  109237 store.go:1343] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0612 00:17:03.222865  109237 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.223808  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.223828  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.224493  109237 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0612 00:17:03.225172  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.225975  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.226664  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.231004  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.231712  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.231724  109237 store.go:1343] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0612 00:17:03.231865  109237 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0612 00:17:03.232880  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.232735  109237 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.233120  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.233135  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.233663  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.233926  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.236896  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.237173  109237 store.go:1343] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0612 00:17:03.237217  109237 master.go:425] Enabling API group "storage.k8s.io".
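The two large integers that close every Config dump are time.Duration values printed as raw nanoseconds; decoding them shows the etcd compaction and count-poll cadence behind the "Monitoring ... count" lines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The two large integers in every Config dump are raw nanoseconds:
	fmt.Println(time.Duration(300000000000)) // CompactionInterval    -> 5m0s
	fmt.Println(time.Duration(60000000000))  // CountMetricPollPeriod -> 1m0s
}
```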
I0612 00:17:03.237280  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.237411  109237 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0612 00:17:03.237698  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.237820  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.237843  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.237912  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.238155  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.238507  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.238676  109237 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0612 00:17:03.238866  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.238964  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.239072  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.239143  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.239156  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.239318  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.239466  109237 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0612 00:17:03.239801  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.240403  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.240536  109237 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0612 00:17:03.240798  109237 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.240902  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.240915  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.240946  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.241002  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.241067  109237 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0612 00:17:03.241398  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.241701  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.241826  109237 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0612 00:17:03.241872  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.242041  109237 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0612 00:17:03.242118  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.242268  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.242284  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.242354  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.242478  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.244365  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.244543  109237 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0612 00:17:03.244657  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.244981  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.245017  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.245071  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.245096  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.245108  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.245213  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.245415  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.245727  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.245850  109237 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0612 00:17:03.247464  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.247513  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.247694  109237 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0612 00:17:03.247934  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.248024  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.248049  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.248080  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.248159  109237 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0612 00:17:03.248603  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.249231  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.249632  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.249826  109237 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0612 00:17:03.250114  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.250271  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.250302  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.250357  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.250418  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.250464  109237 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0612 00:17:03.250774  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.251167  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.251312  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.251383  109237 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0612 00:17:03.251509  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.251521  109237 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0612 00:17:03.252077  109237 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.252166  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.252181  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.252234  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.252281  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.252340  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.252582  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.252665  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.252772  109237 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0612 00:17:03.252824  109237 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0612 00:17:03.253057  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.253919  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.256037  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.256159  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.256179  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.256255  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.256308  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.256632  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.256814  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.256855  109237 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0612 00:17:03.256971  109237 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0612 00:17:03.257148  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.257361  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.257390  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.257462  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.257555  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.258877  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.259237  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.259329  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.259438  109237 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0612 00:17:03.259507  109237 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0612 00:17:03.259628  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.259719  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.259735  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.259770  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.259816  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.260092  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.260197  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.260275  109237 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0612 00:17:03.260309  109237 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0612 00:17:03.260446  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.260518  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.260532  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.260584  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.260629  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.260945  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.261026  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.261145  109237 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0612 00:17:03.261175  109237 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0612 00:17:03.261333  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.261358  109237 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.261476  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.261497  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.261559  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.261637  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.261926  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.261991  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.262011  109237 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0612 00:17:03.262029  109237 master.go:425] Enabling API group "apps".
I0612 00:17:03.262057  109237 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.262103  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.262113  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.262109  109237 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0612 00:17:03.262140  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.262290  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.263942  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.263973  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.264009  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.264075  109237 store.go:1343] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0612 00:17:03.264095  109237 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.264110  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.264133  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.264139  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.264159  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.264161  109237 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0612 00:17:03.264205  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.264373  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.264460  109237 store.go:1343] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0612 00:17:03.264479  109237 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0612 00:17:03.264509  109237 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.264545  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.264595  109237 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0612 00:17:03.264675  109237 client.go:354] parsed scheme: ""
I0612 00:17:03.264688  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:03.264716  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:03.264827  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.265041  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:03.265127  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.265136  109237 store.go:1343] Monitoring events count at <storage-prefix>//events
I0612 00:17:03.265150  109237 master.go:425] Enabling API group "events.k8s.io".
I0612 00:17:03.265247  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.265430  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:03.265512  109237 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0612 00:17:03.266221  109237 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.266425  109237 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.266671  109237 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.266802  109237 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.266921  109237 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.267011  109237 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.267244  109237 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.267371  109237 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.267464  109237 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.267612  109237 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.268339  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.268606  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.269283  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.269408  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.269583  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.269789  109237 watch_cache.go:405] Replace watchCache (rev: 22390) 
I0612 00:17:03.270665  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.270886  109237 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.271407  109237 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.271582  109237 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.272143  109237 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.272361  109237 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0612 00:17:03.272405  109237 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
I0612 00:17:03.272847  109237 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.272927  109237 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.273059  109237 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.273537  109237 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.274017  109237 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.274568  109237 storage_factory.go:285] storing daemonsets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.274733  109237 storage_factory.go:285] storing daemonsets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.275174  109237 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.275334  109237 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.275499  109237 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.275654  109237 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.276177  109237 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.276367  109237 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.276886  109237 storage_factory.go:285] storing networkpolicies.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.277331  109237 storage_factory.go:285] storing podsecuritypolicies.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.277885  109237 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.278054  109237 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.278209  109237 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.278257  109237 storage_factory.go:285] storing replicationcontrollers.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.278405  109237 storage_factory.go:285] storing replicationcontrollers.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.278941  109237 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.279459  109237 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.279614  109237 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.280019  109237 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0612 00:17:03.280072  109237 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0612 00:17:03.280608  109237 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.280785  109237 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.281108  109237 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.281594  109237 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.281901  109237 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.282362  109237 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.282931  109237 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.283420  109237 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.283823  109237 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.284285  109237 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.284773  109237 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0612 00:17:03.284826  109237 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0612 00:17:03.285275  109237 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.285735  109237 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0612 00:17:03.285792  109237 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0612 00:17:03.286215  109237 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.286602  109237 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.286758  109237 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.287131  109237 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.287432  109237 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.287768  109237 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.288121  109237 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0612 00:17:03.288166  109237 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0612 00:17:03.288751  109237 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.289167  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.289374  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.289817  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.289996  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.290167  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.290662  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.290868  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.291023  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.291494  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.291692  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.291879  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.292573  109237 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.293221  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.293471  109237 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.293988  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.294166  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.294365  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.294888  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.295069  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.295242  109237 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.295792  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.295950  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.296145  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.296738  109237 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.297254  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.297348  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.297495  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.297665  109237 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.298089  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.298285  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.298438  109237 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.298918  109237 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.299386  109237 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0612 00:17:03.299977  109237 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aeff4417-71b1-4865-8b3f-cbbf56587f7a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
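Each storage_factory.go line above records the apiserver wiring one resource's REST storage to the same etcd backend: the resource is stored at its preferred version (apps/v1, admissionregistration.k8s.io/v1beta1, events.k8s.io/v1beta1), read back as the group's __internal version, and every registry shares a single storagebackend.Config. A minimal sketch of that config shape, assuming field names exactly as printed in the log (these are illustrative stand-ins, not the vendored k8s.io/apiserver types):

package main

import (
	"fmt"
	"time"
)

// TransportConfig and Config mirror the fields printed by
// storage_factory.go above; they are illustrative stand-ins,
// not the vendored k8s.io/apiserver types.
type TransportConfig struct {
	ServerList                []string
	KeyFile, CertFile, CAFile string
}

type Config struct {
	Type                  string // "" selects the default (etcd3) backend
	Prefix                string // per-test key prefix
	Transport             TransportConfig
	Paging                bool
	CompactionInterval    time.Duration
	CountMetricPollPeriod time.Duration
}

func main() {
	cfg := Config{
		Prefix:                "aeff4417-71b1-4865-8b3f-cbbf56587f7a",
		Transport:             TransportConfig{ServerList: []string{"http://127.0.0.1:2379"}},
		Paging:                true,
		CompactionInterval:    300 * time.Second, // 300000000000ns in the log
		CountMetricPollPeriod: 60 * time.Second,  // 60000000000ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}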
I0612 00:17:03.302101  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.302129  109237 healthz.go:161] healthz check poststarthook/bootstrap-controller failed: not finished
I0612 00:17:03.302136  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.302143  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.302150  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.302156  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.302304  109237 wrap.go:47] GET /healthz: (274.463µs) 500
goroutine 10888 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f67e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f67e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026f13c0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc003f420f0, 0xc00005e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1000)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc003f420f0, 0xc0080a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b64180, 0xc00785dc40, 0x7423e60, 0xc003f420f0, 0xc0080a1000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39494]
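The 500 above is the aggregated healthz report: every named check is listed as [+]name ok or [-]name failed, the concrete reasons stay in the server-side log (the healthz.go:161 lines) while the response body says only "reason withheld", and a single failing check makes the whole endpoint return 500. A rough sketch of that aggregation pattern, with hypothetical check names rather than the vendored healthz package:

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// namedCheck pairs a healthz check name with its probe.
type namedCheck struct {
	name  string
	check func() error
}

// handleHealthz writes one [+]/[-] line per check and returns 500 if
// any check fails; real reasons are only logged server-side, the
// response body withholds them.
func handleHealthz(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var b strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.check(); err != nil {
				failed = true
				fmt.Printf("healthz check %s failed: %v\n", c.name, err)
				fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&b, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			b.WriteString("healthz check failed\n")
			http.Error(w, b.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, "ok")
	}
}

func main() {
	checks := []namedCheck{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("etcd client connection not yet established") }},
	}
	fmt.Println(http.ListenAndServe("127.0.0.1:8080", handleHealthz(checks))) // hypothetical port
}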
I0612 00:17:03.302589  109237 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.311123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39496]
I0612 00:17:03.305314  109237 wrap.go:47] GET /api/v1/services: (1.334319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39496]
I0612 00:17:03.309390  109237 wrap.go:47] GET /api/v1/services: (1.041565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39496]
I0612 00:17:03.311636  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.311665  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.311676  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.311686  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.311697  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.311838  109237 wrap.go:47] GET /healthz: (438.603µs) 500
goroutine 10891 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f7180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f7180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026f1cc0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc003f42138, 0xc0063c8a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1c00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc003f42138, 0xc0080a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b64c60, 0xc00785dc40, 0x7423e60, 0xc003f42138, 0xc0080a1c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39496]
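Note the state change between the two polls: poststarthook/bootstrap-controller has flipped from failed to ok, while etcd, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and ca-registration are still pending. The stack traces themselves all show the same delegation chain, outermost first: the timeout filter, then WithAuthentication, WithImpersonation, WithMaxInFlightLimit, and WithAuthorization, before the PathRecorderMux dispatches to the healthz handler. That is plain http.Handler wrapping; a minimal sketch of the pattern with made-up filter labels (the real filters live under the vendored k8s.io/apiserver filters packages):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// withLabel wraps next, mimicking how each apiserver filter in the
// stack trace decorates the handler beneath it.
func withLabel(label string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("enter", label)
		next.ServeHTTP(w, r)
	})
}

func main() {
	healthz := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	})
	// The innermost handler is wrapped last, so a request traverses
	// timeout -> authentication -> impersonation -> max-in-flight -> authorization.
	chain := withLabel("timeout",
		withLabel("authentication",
			withLabel("impersonation",
				withLabel("max-in-flight",
					withLabel("authorization", healthz)))))

	rec := httptest.NewRecorder()
	chain.ServeHTTP(rec, httptest.NewRequest("GET", "/healthz", nil))
	fmt.Println("status:", rec.Code)
}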
I0612 00:17:03.312776  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.325963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:03.313113  109237 wrap.go:47] GET /api/v1/services: (705.758µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39496]
I0612 00:17:03.313354  109237 wrap.go:47] GET /api/v1/services: (1.099067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39500]
I0612 00:17:03.315290  109237 wrap.go:47] POST /api/v1/namespaces: (1.678664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:03.316685  109237 wrap.go:47] GET /api/v1/namespaces/kube-public: (922.528µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:03.318448  109237 wrap.go:47] POST /api/v1/namespaces: (1.371077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:03.319428  109237 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (671.687µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:03.320787  109237 wrap.go:47] POST /api/v1/namespaces: (1.058668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
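The three 404/201 pairs above are the bootstrap controller ensuring the system namespaces exist: kube-system, kube-public, and kube-node-lease are each looked up, found missing, and created. The idempotent get-then-create shape, sketched against a plain HTTP client with a hypothetical server address (the real controller goes through client-go):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureNamespace GETs the namespace and POSTs it only on a 404,
// mirroring the request pairs in the log above.
func ensureNamespace(base, name string) error {
	resp, err := http.Get(base + "/api/v1/namespaces/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already exists, or some other state to inspect
	}
	body := fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name)
	resp, err = http.Post(base+"/api/v1/namespaces", "application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("creating %s: got %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	base := "http://127.0.0.1:8080" // hypothetical apiserver address
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(base, ns); err != nil {
			fmt.Println(err)
		}
	}
}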
I0612 00:17:03.403067  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.403231  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.403294  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.403312  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.403326  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.403497  109237 wrap.go:47] GET /healthz: (601.282µs) 500
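From here on, the two test clients (Go-http-client and scheduler.test) keep re-polling GET /healthz on roughly 100ms boundaries, and each poll fails the same way until the etcd client connects and the remaining post-start hooks finish. A readiness gate like this is typically driven by a poll-until-healthy loop; a stdlib-only sketch with a hypothetical address and timeout:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the timeout
// elapses, mirroring the retry cadence visible in the log.
func waitForHealthz(base string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	err := waitForHealthz("http://127.0.0.1:8080", 100*time.Millisecond, 30*time.Second)
	if err != nil {
		fmt.Println(err)
	}
}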
I0612 00:17:03.412967  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.413047  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.413062  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.413071  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.413079  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.413267  109237 wrap.go:47] GET /healthz: (426.228µs) 500
I0612 00:17:03.503133  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.503176  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.503201  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.503211  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.503219  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.503375  109237 wrap.go:47] GET /healthz: (412.707µs) 500
I0612 00:17:03.513074  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.513110  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.513120  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.513127  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.513133  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.513351  109237 wrap.go:47] GET /healthz: (446.87µs) 500
I0612 00:17:03.603112  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.603161  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.603171  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.603177  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.603182  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.603486  109237 wrap.go:47] GET /healthz: (499.244µs) 500
I0612 00:17:03.612878  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.612913  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.612924  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.612930  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.612935  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.613055  109237 wrap.go:47] GET /healthz: (300.003µs) 500
I0612 00:17:03.703156  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.703216  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.703229  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.703239  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.703270  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.703441  109237 wrap.go:47] GET /healthz: (493.528µs) 500
I0612 00:17:03.712888  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.712924  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.712937  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.712946  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.712953  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.713139  109237 wrap.go:47] GET /healthz: (408.159µs) 500
I0612 00:17:03.803142  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.803222  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.803235  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.803248  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.803256  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.803456  109237 wrap.go:47] GET /healthz: (463.852µs) 500
I0612 00:17:03.818294  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.818421  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.818507  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.818565  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.818584  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.844914  109237 wrap.go:47] GET /healthz: (26.777923ms) 500
I0612 00:17:03.903114  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.903165  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.903175  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.903181  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.903198  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.903342  109237 wrap.go:47] GET /healthz: (358.549µs) 500
I0612 00:17:03.912953  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:03.912998  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:03.913012  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:03.913021  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:03.913029  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:03.913172  109237 wrap.go:47] GET /healthz: (366.167µs) 500
goroutine 10997 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002741650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002741650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00488f480, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000b8a7c8, 0xc0040fe780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13d00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000b8a7c8, 0xc002c13d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00787b7a0, 0xc00785dc40, 0x7423e60, 0xc000b8a7c8, 0xc002c13d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
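The goroutine dumps above are not panics: the apiserver's httplog wrapper records a stack trace whenever a handler writes a 5xx status. Read bottom-up, each trace shows the request passing through the standard filter chain — timeoutHandler, WithAuthentication, WithImpersonation, WithMaxInFlightLimit, WithAuthorization — before the mux dispatches to the healthz handler. That chain is built by ordinary http.Handler wrapping; a toy sketch of the composition style (filter names and logging are illustrative, not the real filter signatures):

package main

import (
	"log"
	"net/http"
)

// withFilter wraps next so the filter runs before delegating --
// the same composition style the apiserver uses for its filters.
func withFilter(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("filter %s: %s %s", name, r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "healthz check failed", http.StatusInternalServerError)
	})

	// The innermost handler is wrapped last, so requests traverse the
	// filters in reverse construction order -- matching the traces above,
	// where WithAuthentication sits below WithAuthorization in the stack.
	var handler http.Handler = mux
	handler = withFilter("authorization", handler)
	handler = withFilter("max-in-flight", handler)
	handler = withFilter("impersonation", handler)
	handler = withFilter("authentication", handler)

	log.Fatal(http.ListenAndServe("127.0.0.1:8080", handler))
}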
I0612 00:17:04.003489  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:04.003529  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.003542  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.003552  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.003559  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.003712  109237 wrap.go:47] GET /healthz: (392.509µs) 500
goroutine 10999 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002741960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002741960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00488f8c0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0040fef00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2200)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000b8a7d0, 0xc0027d2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00787be60, 0xc00785dc40, 0x7423e60, 0xc000b8a7d0, 0xc0027d2200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39494]
I0612 00:17:04.012955  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:04.012989  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.013002  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.013012  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.013019  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.013161  109237 wrap.go:47] GET /healthz: (347.925µs) 500
goroutine 11001 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002741ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002741ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004866040, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0040ff680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2e00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000b8a7f0, 0xc0027d2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0075a26c0, 0xc00785dc40, 0x7423e60, 0xc000b8a7f0, 0xc0027d2e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:04.103051  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:04.103080  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.103093  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.103102  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.103111  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.103264  109237 wrap.go:47] GET /healthz: (362.725µs) 500
goroutine 10967 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085b97a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085b97a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00490fe40, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc008538698, 0xc005735c80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc008538698, 0xc002a3b300)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc008538698, 0xc002a3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007cb7500, 0xc00785dc40, 0x7423e60, 0xc008538698, 0xc002a3b300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39494]
I0612 00:17:04.113539  109237 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0612 00:17:04.113572  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.113584  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.113593  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.113600  109237 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.113748  109237 wrap.go:47] GET /healthz: (338.513µs) 500
goroutine 10969 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085b98f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085b98f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00485e1e0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc008538718, 0xc00412e480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc008538718, 0xc002a3bb00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc008538718, 0xc002a3bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007cb76e0, 0xc00785dc40, 0x7423e60, 0xc008538718, 0xc002a3bb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
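Up to this point every probe fails on "etcd client connection not yet established"; the probes repeat every few tens of milliseconds while startup proceeds, and they stop reporting [-]etcd once the client below pins 127.0.0.1:2379. A wait-until-healthy loop of the kind a test harness might use — the URL, interval, and timeout are illustrative assumptions, not taken from this test:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
// Non-200 responses (like the 500s above) just mean "try again".
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("server at %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}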
I0612 00:17:04.122590  109237 client.go:354] parsed scheme: ""
I0612 00:17:04.122622  109237 client.go:354] scheme "" not registered, fallback to default scheme
I0612 00:17:04.122692  109237 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0612 00:17:04.122781  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:04.123271  109237 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0612 00:17:04.123353  109237 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0612 00:17:04.204753  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.204785  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.204795  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.204803  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.205403  109237 wrap.go:47] GET /healthz: (2.480481ms) 500
goroutine 11010 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085b9b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085b9b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00485f100, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc008538828, 0xc00010f340, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc008538828, 0xc001bd4900)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc008538828, 0xc001bd4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007580480, 0xc00785dc40, 0x7423e60, 0xc008538828, 0xc001bd4900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39494]
I0612 00:17:04.214854  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.214889  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.214899  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.214907  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.215073  109237 wrap.go:47] GET /healthz: (2.367783ms) 500
goroutine 10950 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008472150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008472150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004884c40, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc002d5a388, 0xc00010f760, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5a00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc002d5a388, 0xc002dd5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0077f5380, 0xc00785dc40, 0x7423e60, 0xc002d5a388, 0xc002dd5a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:04.312543  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.312579  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.312590  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.312598  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.312805  109237 wrap.go:47] GET /healthz: (9.779292ms) 500
goroutine 10960 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008472540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008472540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004885880, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc002d5a3f0, 0xc0063666e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de2e00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc002d5a3f0, 0xc000de2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00737acc0, 0xc00785dc40, 0x7423e60, 0xc002d5a3f0, 0xc000de2e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39636]
I0612 00:17:04.313147  109237 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (11.014127ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.313268  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (10.967421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39498]
I0612 00:17:04.313498  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (10.937236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39494]
I0612 00:17:04.328039  109237 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (11.760402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39640]
I0612 00:17:04.328649  109237 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (12.029834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.329044  109237 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0612 00:17:04.337119  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.337166  109237 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0612 00:17:04.337177  109237 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0612 00:17:04.337197  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0612 00:17:04.337355  109237 wrap.go:47] GET /healthz: (23.097185ms) 500
goroutine 10984 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008465420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008465420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004850b00, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000153cf0, 0xc004100840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3c00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000153cf0, 0xc0025d3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0075c93e0, 0xc00785dc40, 0x7423e60, 0xc000153cf0, 0xc0025d3c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.337968  109237 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (8.400656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.338492  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (24.865562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39498]
I0612 00:17:04.338906  109237 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (8.485697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39640]
I0612 00:17:04.350146  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (10.54379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39498]
I0612 00:17:04.350636  109237 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (11.649628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.350933  109237 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0612 00:17:04.350949  109237 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
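The repeating GET … 404 followed by POST … 201 pairs are the bootstrap post-start hooks ensuring each built-in object exists: look it up, and create it only on a not-found. The same idempotent pattern drives both the priority classes just created and the long run of clusterroles that follows. A generic sketch of that ensure step against a bare REST endpoint — the paths and payload are illustrative only; the real bootstrap goes through client-go, not raw HTTP:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureExists GETs the object and, only on a 404, POSTs it to the
// collection -- mirroring the GET 404 / POST 201 pairs in the log.
func ensureExists(client *http.Client, objectURL, collectionURL string, body []byte) error {
	resp, err := client.Get(objectURL)
	if err != nil {
		return err
	}
	resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusOK:
		return nil // already exists, nothing to do
	case http.StatusNotFound:
		post, err := client.Post(collectionURL, "application/json", bytes.NewReader(body))
		if err != nil {
			return err
		}
		post.Body.Close()
		if post.StatusCode != http.StatusCreated {
			return fmt.Errorf("create failed: %s", post.Status)
		}
		return nil
	default:
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
}

func main() {
	base := "http://127.0.0.1:8080"
	err := ensureExists(http.DefaultClient,
		base+"/apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical",
		base+"/apis/scheduling.k8s.io/v1beta1/priorityclasses",
		[]byte(`{"metadata":{"name":"system-node-critical"},"value":2000001000}`))
	fmt.Println(err)
}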
I0612 00:17:04.351818  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (954.057µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39640]
I0612 00:17:04.355284  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.661944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.356534  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (933.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.357611  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (723.116µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.359177  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.232546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.361352  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (827.404µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.362354  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (677.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.367322  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.683844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.367545  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0612 00:17:04.372812  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (5.004166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.375257  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.020377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.375464  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0612 00:17:04.384170  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (8.284947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.386961  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.2491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.387537  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0612 00:17:04.388875  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.106363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.391341  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.882352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.391527  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0612 00:17:04.392529  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (838.226µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.396085  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.931617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.396275  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0612 00:17:04.397352  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (946.545µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.399535  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.894893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.400046  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0612 00:17:04.403383  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (3.076664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.404455  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.404484  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.404615  109237 wrap.go:47] GET /healthz: (1.397556ms) 500
goroutine 11079 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084a7810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084a7810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00479ae60, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc003f42560, 0xc000077a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc003f42560, 0xc004296500)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc003f42560, 0xc004296500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00706c600, 0xc00785dc40, 0x7423e60, 0xc003f42560, 0xc004296500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:04.405616  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.730962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.405836  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0612 00:17:04.406855  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (829.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.408504  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.352389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.408660  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0612 00:17:04.410095  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.31271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.412827  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.378251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.413042  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0612 00:17:04.424370  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.424408  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.424599  109237 wrap.go:47] GET /healthz: (11.027892ms) 500
goroutine 11107 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084cc3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084cc3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00474bae0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc008360c20, 0xc00385c500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc008360c20, 0xc00448b800)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc008360c20, 0xc00448b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006e1efc0, 0xc00785dc40, 0x7423e60, 0xc008360c20, 0xc00448b800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.425832  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (12.6764ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.429239  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.89483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.429467  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0612 00:17:04.430887  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.268874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.441959  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.1504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.442248  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0612 00:17:04.443529  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.017186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.448202  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.274008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.448457  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0612 00:17:04.454471  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.247279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.457006  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.137306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.457242  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0612 00:17:04.459410  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (2.005857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.462669  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.189865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.463011  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0612 00:17:04.464427  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.247978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.469800  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.395488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.470011  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0612 00:17:04.471089  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (891.719µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.473747  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.288408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.473914  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0612 00:17:04.474957  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (893.276µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.476661  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.476875  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0612 00:17:04.477841  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (784.28µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.479230  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.097154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.479419  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0612 00:17:04.481019  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.438613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.482694  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.226147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.482904  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0612 00:17:04.484443  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.427199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.486593  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.729908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.486904  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0612 00:17:04.493337  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (6.155691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.495606  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.495792  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0612 00:17:04.498107  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (2.169863ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.500886  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.440336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.501174  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0612 00:17:04.502107  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (776.788µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.504094  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.648631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.504368  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0612 00:17:04.505314  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (756.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.506884  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.261006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.507051  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0612 00:17:04.507255  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.507273  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.507417  109237 wrap.go:47] GET /healthz: (4.696043ms) 500
goroutine 11113 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084cdce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084cdce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0045fab60, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc008361430, 0xc00385cdc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc008361430, 0xc007d0b900)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc008361430, 0xc007d0b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0051e2240, 0xc00785dc40, 0x7423e60, 0xc008361430, 0xc007d0b900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:04.508558  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (960.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.513727  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.789646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.513960  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0612 00:17:04.513963  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.513984  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.514110  109237 wrap.go:47] GET /healthz: (890.038µs) 500
goroutine 11177 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00851f810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00851f810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0045c7ae0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000515398, 0xc0044c2c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000515398, 0xc00931f300)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000515398, 0xc00931f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006720a80, 0xc00785dc40, 0x7423e60, 0xc000515398, 0xc00931f300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.516600  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (2.436171ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.518678  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.537393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.518876  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0612 00:17:04.519767  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (740.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.521713  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.423551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.521916  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0612 00:17:04.522809  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (722.437µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.526214  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.543762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.526454  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0612 00:17:04.527948  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.318127ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.530132  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.644291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.535954  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0612 00:17:04.537286  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (994.104µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.541497  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.389251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.556178  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0612 00:17:04.558738  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.064256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.562796  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.777764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.563129  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0612 00:17:04.567107  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (825.742µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.569059  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.596244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.569291  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0612 00:17:04.570110  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (661.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.572723  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.572920  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0612 00:17:04.574666  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.511471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.575865  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (951.687µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.576014  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0612 00:17:04.576839  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (532.506µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.578373  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.237532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.578546  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0612 00:17:04.579389  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (714.681µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.581102  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.222991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.581413  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0612 00:17:04.582254  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (634.428µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.583697  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.002329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.583955  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0612 00:17:04.584815  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (631.672µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.586447  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.271726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.586646  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0612 00:17:04.587812  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (715.202µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.589518  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.589740  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0612 00:17:04.590747  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (844.403µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.592914  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.65669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.593324  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0612 00:17:04.594334  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (791.427µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.596460  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.596682  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0612 00:17:04.598162  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.292721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.599542  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.017222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.599874  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0612 00:17:04.600945  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (857.19µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.603501  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.004377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.603727  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0612 00:17:04.604184  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.604220  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.604364  109237 wrap.go:47] GET /healthz: (1.562168ms) 500
goroutine 11254 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008572fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008572fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00425a5a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000515c68, 0xc00385d400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1200)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000515c68, 0xc0064b1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004926480, 0xc00785dc40, 0x7423e60, 0xc000515c68, 0xc0064b1200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:04.605079  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.18459ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.607064  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.607412  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0612 00:17:04.608543  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (893.324µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.610554  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.571855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.610744  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0612 00:17:04.611716  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (798.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.613343  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.613369  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.613389  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.613712  109237 wrap.go:47] GET /healthz: (903.38µs) 500
goroutine 11243 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085cc070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085cc070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0041b4cc0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc003f434a8, 0xc0044c32c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc003f434a8, 0xc006618400)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc003f434a8, 0xc006618400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003a398c0, 0xc00785dc40, 0x7423e60, 0xc003f434a8, 0xc006618400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.613717  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0612 00:17:04.614909  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (901.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.617205  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.774715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.617408  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0612 00:17:04.618471  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (859.123µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.620240  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.448026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.620448  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0612 00:17:04.621660  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (976.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.623406  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.623656  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0612 00:17:04.624762  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (910.857µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.626537  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.626718  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0612 00:17:04.627642  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (711.719µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.629302  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.246358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.629512  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0612 00:17:04.630481  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (779.607µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.632129  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.222707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.632323  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0612 00:17:04.633390  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (881.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.634829  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.050899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.635013  109237 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
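Each clusterrole above is created by the same reconcile loop: a GET that returns 404 because the object does not exist yet, then a POST that returns 201 and logs "created clusterrole...". A minimal sketch of that get-or-create step, assuming a configured client-go Clientset; the package and function names are illustrative, and the context-free Get/Create signatures match the client-go vintage contemporary with this log:

    package bootstrap

    import (
    	rbacv1 "k8s.io/api/rbac/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ensureClusterRole mirrors the pattern in the log: look the role up
    // first (the 404s above), and create it only when it is missing
    // (the 201s that follow).
    func ensureClusterRole(cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
    	_, err := cs.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
    	if err == nil {
    		return nil // already present; nothing to do
    	}
    	if !apierrors.IsNotFound(err) {
    		return err // a real API error, not just "missing"
    	}
    	_, err = cs.RbacV1().ClusterRoles().Create(role)
    	return err
    }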
I0612 00:17:04.636164  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (973.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.643643  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.919055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.644248  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0612 00:17:04.663132  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.30754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.684158  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.30073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.684489  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0612 00:17:04.703153  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.321831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.703586  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.703645  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.703894  109237 wrap.go:47] GET /healthz: (1.122721ms) 500
goroutine 11330 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00860a700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00860a700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040737a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000515fb0, 0xc003432500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dad000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000515fb0, 0xc006dacf00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000515fb0, 0xc006dacf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004931500, 0xc00785dc40, 0x7423e60, 0xc000515fb0, 0xc006dacf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:04.713752  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.713854  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.714057  109237 wrap.go:47] GET /healthz: (1.283042ms) 500
goroutine 11248 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085cd0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085cd0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fee840, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc003f43710, 0xc002634a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc003f43710, 0xc006619200)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc003f43710, 0xc006619200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0042ef260, 0xc00785dc40, 0x7423e60, 0xc003f43710, 0xc006619200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.723660  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.886064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.724016  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0612 00:17:04.743231  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.426748ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.763844  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.051214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.764100  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0612 00:17:04.783637  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.560326ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.804252  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.804353  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.804466  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.61057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.804534  109237 wrap.go:47] GET /healthz: (1.553675ms) 500
goroutine 11365 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008614d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008614d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003e018a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc002b58638, 0xc0026352c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc002b58638, 0xc007cde200)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc002b58638, 0xc007cde200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00409ccc0, 0xc00785dc40, 0x7423e60, 0xc002b58638, 0xc007cde200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39636]
I0612 00:17:04.804717  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0612 00:17:04.813721  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.813815  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.813960  109237 wrap.go:47] GET /healthz: (1.220169ms) 500
goroutine 11332 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00860ac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00860ac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004073be0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000515fe0, 0xc003e023c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad300)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000515fe0, 0xc006dad300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049317a0, 0xc00785dc40, 0x7423e60, 0xc000515fe0, 0xc006dad300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.823079  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.294775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.843749  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.007931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.844085  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0612 00:17:04.862961  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.182997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.883654  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.843895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.884049  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0612 00:17:04.903220  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.434209ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:04.903702  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.903777  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.903929  109237 wrap.go:47] GET /healthz: (1.170676ms) 500
goroutine 11334 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00860b030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00860b030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003de68a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc00057c760, 0xc00385da40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dade00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc00057c760, 0xc006dadd00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc00057c760, 0xc006dadd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003f20060, 0xc00785dc40, 0x7423e60, 0xc00057c760, 0xc006dadd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:04.913986  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:04.914070  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:04.914265  109237 wrap.go:47] GET /healthz: (1.433982ms) 500
goroutine 11356 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008636a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008636a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d891a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc004847230, 0xc003e02a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc004847230, 0xc007d86e00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc004847230, 0xc007d86e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003f6aae0, 0xc00785dc40, 0x7423e60, 0xc004847230, 0xc007d86e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.923499  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.788593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.923850  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0612 00:17:04.942976  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.255346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.963995  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.155665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:04.964416  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0612 00:17:04.985503  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.884435ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.007454  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.007506  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.007670  109237 wrap.go:47] GET /healthz: (4.704405ms) 500
goroutine 11320 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008624700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008624700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003df5900, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c2be8, 0xc002635900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec200)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c2be8, 0xc0083ec200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004811ec0, 0xc00785dc40, 0x7423e60, 0xc0029c2be8, 0xc0083ec200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:05.009122  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.240485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.009393  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0612 00:17:05.013363  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.013391  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.013569  109237 wrap.go:47] GET /healthz: (906.867µs) 500
goroutine 11325 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086249a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086249a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c2a6c0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c2cb0, 0xc003432a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed100)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c2cb0, 0xc0083ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003862660, 0xc00785dc40, 0x7423e60, 0xc0029c2cb0, 0xc0083ed100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.022829  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.054147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.044071  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.248713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.044337  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0612 00:17:05.063083  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.286532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.090778  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.927166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.091057  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
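The repeating GET → 404 followed by POST → 201 pairs are the RBAC post-start hook (storage_rbac.go) reconciling the default system: cluster role bindings: look each one up, and create it only when it is absent. A sketch of that ensure-exists step, written against the context-taking client-go API; the function and wiring here are assumed for illustration (the real reconciler runs inside the apiserver, not as a client).

package bootstrap

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRoleBinding performs the GET-then-create cycle seen in the
// log: a GET that 404s is followed by a POST that returns 201 Created.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to reconcile
	}
	if !apierrors.IsNotFound(err) {
		return err // a real error, not just "missing"
	}
	// The GET returned 404: create the binding (the POST ... 201 above).
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
		return fmt.Errorf("creating clusterrolebinding %s: %w", crb.Name, err)
	}
	return nil
}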
I0612 00:17:05.103151  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.346778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.104120  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.104146  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.104327  109237 wrap.go:47] GET /healthz: (1.543234ms) 500
goroutine 11341 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00860bd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00860bd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c072a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc00057c8c8, 0xc0003d9040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1ce00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc00057c8c8, 0xc007e1ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001f90f60, 0xc00785dc40, 0x7423e60, 0xc00057c8c8, 0xc007e1ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
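Each /healthz 500 above follows the aggregation pattern of the apiserver health endpoint: every named check runs, the body gets one [+]/[-] line per check, and a single failure turns the whole response into a 500 whose body says only "reason withheld", while the concrete reason (here "not finished") goes to the server log (the healthz.go:161 lines). A minimal self-contained sketch of that pattern, assuming nothing beyond the standard library; this is not the actual k8s.io/apiserver/pkg/server/healthz code.

package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

type namedCheck struct {
	name  string
	check func(*http.Request) error
}

// healthzHandler runs every check, emits one [+]/[-] line per check, and
// returns 500 if any check failed. The failure reason is written only to
// the server log, mirroring "reason withheld" in the responses above.
func healthzHandler(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.check(r); err != nil {
				log.Printf("healthz check %s failed: %v", c.name, err)
				fmt.Fprintf(&body, "[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				fmt.Fprintf(&body, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			body.WriteString("healthz check failed\n")
			http.Error(w, body.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, "ok")
	}
}

func main() {
	notFinished := true // would flip to false once bootstrap completes
	http.HandleFunc("/healthz", healthzHandler([]namedCheck{
		{"ping", func(*http.Request) error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func(*http.Request) error {
			if notFinished {
				return fmt.Errorf("not finished")
			}
			return nil
		}},
	}))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}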
I0612 00:17:05.114053  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.114277  109237 wrap.go:47] GET /healthz: (1.517215ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
[healthz check list, goroutine 11343 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.123647  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.900011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.123864  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
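The cadence of these failures is client-driven: two pollers (ports 39634 and 39636, one identifying as scheduler.test and one as Go-http-client/1.1) re-check /healthz roughly every 100ms, judging by the timestamps, until the bootstrap hook completes. A sketch of such a poll-until-healthy loop; waitForHealthy is a hypothetical helper for illustration, not the test's actual code.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthy polls url every interval until it returns 200 OK or the
// timeout elapses. While the server bootstraps, each poll sees the same
// 500 plus check-list body that fills this log.
func waitForHealthy(url string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
			fmt.Printf("not healthy yet:\n%s", body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	if err := waitForHealthy("http://127.0.0.1:8080/healthz", 100*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}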
I0612 00:17:05.143105  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.292126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.164010  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.262168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.164376  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0612 00:17:05.183457  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.561612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.204001  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.204408  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.204579  109237 wrap.go:47] GET /healthz: (1.787319ms) 500 [Go-http-client/1.1 127.0.0.1:39636]
[healthz check list, goroutine 11397 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.204409  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0612 00:17:05.213977  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.214161  109237 wrap.go:47] GET /healthz: (1.40158ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
[healthz check list, goroutine 11427 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.222818  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.083262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.244054  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.228323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.244341  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0612 00:17:05.267410  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.277468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.299377  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (17.574943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.299729  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0612 00:17:05.303297  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.599174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.304251  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.304415  109237 wrap.go:47] GET /healthz: (1.543425ms) 500 [Go-http-client/1.1 127.0.0.1:39634]
[healthz check list, goroutine 11442 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.313888  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.314082  109237 wrap.go:47] GET /healthz: (1.350514ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
[healthz check list, goroutine 11444 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.323991  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.214927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.324276  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0612 00:17:05.343380  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.397834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.364220  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.428519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.364468  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0612 00:17:05.383145  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.341855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.403701  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.930774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.403968  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0612 00:17:05.404504  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.404706  109237 wrap.go:47] GET /healthz: (981.89µs) 500 [Go-http-client/1.1 127.0.0.1:39636]
[healthz check list, goroutine 11302 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.413830  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.414014  109237 wrap.go:47] GET /healthz: (1.232118ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
[healthz check list, goroutine 11304 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.423248  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.46476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.444149  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.284755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.444511  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0612 00:17:05.463242  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.389871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.484168  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.307013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.484735  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0612 00:17:05.508931  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.509115  109237 wrap.go:47] GET /healthz: (1.409603ms) 500 [Go-http-client/1.1 127.0.0.1:39634]
[healthz check list, goroutine 11481 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.509648  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.971361ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.513876  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.514085  109237 wrap.go:47] GET /healthz: (1.306537ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
[healthz check list, goroutine 11448 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.524235  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.496097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.524605  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0612 00:17:05.543232  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.391803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.564844  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.028341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.565178  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0612 00:17:05.583389  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.518939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.603776  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.603977  109237 wrap.go:47] GET /healthz: (1.004892ms) 500 [Go-http-client/1.1 127.0.0.1:39634]
[healthz check list, goroutine 11487 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.604359  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.577182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.604541  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0612 00:17:05.613733  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.613907  109237 wrap.go:47] GET /healthz: (1.126528ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
[healthz check list, goroutine 11453 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.622885  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.097043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.651770  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.931934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.653144  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0612 00:17:05.662991  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.164394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.683747  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.950381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.684016  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0612 00:17:05.703786  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.977932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.705949  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.706124  109237 wrap.go:47] GET /healthz: (2.827496ms) 500 [Go-http-client/1.1 127.0.0.1:39634]
[healthz check list, goroutine 11508 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.714809  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.714995  109237 wrap.go:47] GET /healthz: (2.254388ms) 500 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
[healthz check list, goroutine 11417 stack trace, and error body elided: identical to the goroutine 11341 failure above]
I0612 00:17:05.724830  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.536537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.725094  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0612 00:17:05.742941  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.150067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.763677  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.90594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.763938  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0612 00:17:05.783550  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.767775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.805062  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.231933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.805509  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0612 00:17:05.805636  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.805652  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.805791  109237 wrap.go:47] GET /healthz: (3.034426ms) 500
goroutine 11499 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086a9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086a9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038b3b20, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c3938, 0xc0044c3cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c3938, 0xc002611600)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c3938, 0xc002611600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0037ea0c0, 0xc00785dc40, 0x7423e60, 0xc0029c3938, 0xc002611600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39636]
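
The 500s above come from /healthz gating on post-start hooks: each "[+]"/"[-]" line is one named check, and the endpoint keeps returning 500 until the rbac/bootstrap-roles hook finishes seeding the default policy ("reason withheld" indicates the failure detail is kept server-side rather than returned to the caller). A minimal sketch of polling such an endpoint and parsing the check list follows; the URL and timing values are illustrative assumptions, not taken from this job:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// healthzFailures returns the names of the checks marked "[-]" in a
// /healthz body like the ones logged above.
func healthzFailures(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		if strings.HasPrefix(line, "[-]") {
			failed = append(failed, strings.TrimPrefix(line, "[-]"))
		}
	}
	return failed
}

func main() {
	// Assumption: a local, insecure test apiserver; this URL is illustrative.
	const url = "http://127.0.0.1:8080/healthz"
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 here corresponds to the "healthz check failed" lines above.
			fmt.Printf("healthz %d, failing checks: %v\n", resp.StatusCode, healthzFailures(string(body)))
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("healthz never became ready")
}
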
I0612 00:17:05.818413  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.818449  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.818617  109237 wrap.go:47] GET /healthz: (5.623461ms) 500
goroutine 11421 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00867b7a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00867b7a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00394d1c0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc004847860, 0xc003e03cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc004847860, 0xc0026e4c00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc004847860, 0xc0026e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0036e27e0, 0xc00785dc40, 0x7423e60, 0xc004847860, 0xc0026e4c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.822969  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.261646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.844415  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.563623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.844704  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0612 00:17:05.863084  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.2444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.884208  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.328041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.884477  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
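
The GET→404 followed by POST→201 pairs above are the RBAC bootstrap reconciler ensuring each default cluster role binding exists, one object at a time. The same ensure-exists pattern can be sketched against current client-go signatures (the vendored reconciler in this tree is more elaborate and also reconciles rules and subjects); the binding name and subject below are examples, not from this run:

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRoleBinding mirrors the GET/404/POST/201 sequence in the log:
// look the object up, and create it only if it is not found.
func ensureClusterRoleBinding(ctx context.Context, c kubernetes.Interface, desired *rbacv1.ClusterRoleBinding) error {
	_, err := c.RbacV1().ClusterRoleBindings().Get(ctx, desired.Name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = c.RbacV1().ClusterRoleBindings().Create(ctx, desired, metav1.CreateOptions{})
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Hypothetical binding; the names below are illustrative only.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "example:demo-binding"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "view",
		},
		Subjects: []rbacv1.Subject{{
			Kind:     "Group",
			APIGroup: "rbac.authorization.k8s.io",
			Name:     "system:authenticated",
		}},
	}
	if err := ensureClusterRoleBinding(context.Background(), client, crb); err != nil {
		panic(err)
	}
	fmt.Println("ensured clusterrolebinding", crb.Name)
}
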
I0612 00:17:05.903294  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.503185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:05.903894  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.903916  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.904062  109237 wrap.go:47] GET /healthz: (1.220346ms) 500
goroutine 11436 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086677a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086677a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00206a980, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc00057d1f0, 0xc003c5d400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868700)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc00057d1f0, 0xc002868700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00392b320, 0xc00785dc40, 0x7423e60, 0xc00057d1f0, 0xc002868700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:05.913914  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:05.913948  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:05.914097  109237 wrap.go:47] GET /healthz: (1.203301ms) 500
goroutine 11525 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086c6a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086c6a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00380d1a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc004847bb8, 0xc002437040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a100)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc004847bb8, 0xc006c5a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003b07800, 0xc00785dc40, 0x7423e60, 0xc004847bb8, 0xc006c5a100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
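
Each goroutine dump above walks the same delegating handler chain: the timeout filter wraps authentication, which wraps impersonation, the max-in-flight limiter, authorization, and finally the PathRecorderMux that routes to the healthz handler. The nesting itself is ordinary net/http middleware; a sketch with illustrative labels (none of these names are the apiserver's own):

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// withLabel wraps a handler and logs when a request enters this layer,
// standing in for filters like WithAuthentication or WithMaxInFlightLimit.
func withLabel(label string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("enter", label)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Wrap innermost-first; the outermost layer (timeout) ends up at the
	// bottom of a goroutine dump, exactly as in the traces above.
	var handler http.Handler = mux
	handler = withLabel("authorization", handler)
	handler = withLabel("maxInFlight", handler)
	handler = withLabel("impersonation", handler)
	handler = withLabel("authentication", handler)
	handler = http.TimeoutHandler(handler, 30*time.Second, "request timed out")

	log.Fatal(http.ListenAndServe("127.0.0.1:8081", handler)) // illustrative port
}
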
I0612 00:17:05.923587  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.834461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.923839  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0612 00:17:05.943017  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.262152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.964217  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.410579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:05.964480  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0612 00:17:05.983702  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.302292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.004301  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.472994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.004542  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0612 00:17:06.005329  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.005357  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.005506  109237 wrap.go:47] GET /healthz: (1.423978ms) 500
goroutine 11540 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086f22a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086f22a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037fd360, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc002b58fe0, 0xc006144280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028feb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028fea00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc002b58fe0, 0xc0028fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00393fc20, 0xc00785dc40, 0x7423e60, 0xc002b58fe0, 0xc0028fea00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39636]
I0612 00:17:06.013965  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.013997  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.014155  109237 wrap.go:47] GET /healthz: (1.116663ms) 500
goroutine 11556 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086fc230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086fc230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00206bea0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc00057d2f8, 0xc003bf23c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869800)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc00057d2f8, 0xc002869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003c49260, 0xc00785dc40, 0x7423e60, 0xc00057d2f8, 0xc002869800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.022867  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.145941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.044052  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.044702  109237 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0612 00:17:06.062994  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.207649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.064902  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.491323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.084300  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.505797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.084548  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0612 00:17:06.104109  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.104142  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.104312  109237 wrap.go:47] GET /healthz: (1.08071ms) 500
goroutine 11570 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008716380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008716380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001f8bbe0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c3ab0, 0xc0061448c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbe900)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c3ab0, 0xc002cbe900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0037ebda0, 0xc00785dc40, 0x7423e60, 0xc0029c3ab0, 0xc002cbe900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:06.104586  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.970144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.106460  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.374974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.114027  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.114055  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.114522  109237 wrap.go:47] GET /healthz: (1.77729ms) 500
goroutine 11572 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008716460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008716460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001faa260, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c3ac0, 0xc003bf2a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbed00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c3ac0, 0xc002cbed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004244540, 0xc00785dc40, 0x7423e60, 0xc0029c3ac0, 0xc002cbed00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.125780  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.182375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.126028  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0612 00:17:06.142964  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.184563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.144804  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.394372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.164263  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.412746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.164568  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0612 00:17:06.183104  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.300246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.184996  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.248312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.204806  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.979217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.204971  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.205002  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.205142  109237 wrap.go:47] GET /healthz: (2.418077ms) 500
goroutine 11580 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008717110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008717110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000d3bc80, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c3ca0, 0xc003bf3180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98000)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c3ca0, 0xc002e98000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005669c80, 0xc00785dc40, 0x7423e60, 0xc0029c3ca0, 0xc002e98000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:06.205490  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0612 00:17:06.213770  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.213800  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.213952  109237 wrap.go:47] GET /healthz: (1.223273ms) 500
goroutine 11582 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0087171f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0087171f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000d3bf00, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc0029c3cd0, 0xc006145040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98800)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc0029c3cd0, 0xc002e98800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0057a8d80, 0xc00785dc40, 0x7423e60, 0xc0029c3cd0, 0xc002e98800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.223343  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.625433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.225534  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.773635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.244847  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.04589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.245132  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0612 00:17:06.263103  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.332849ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.264868  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.292506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.284273  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.434984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.284538  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0612 00:17:06.303283  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.553147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.305034  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.305062  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.305263  109237 wrap.go:47] GET /healthz: (2.4604ms) 500
goroutine 11520 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086df0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086df0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000af84c0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000287290, 0xc005818780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000287290, 0xc002827200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000287290, 0xc002827100)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000287290, 0xc002827100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003c59560, 0xc00785dc40, 0x7423e60, 0xc000287290, 0xc002827100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:06.306049  109237 wrap.go:47] GET /api/v1/namespaces/kube-public: (2.344038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.323689  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.935055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.323982  109237 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0612 00:17:06.329517  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.329563  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.329740  109237 wrap.go:47] GET /healthz: (13.424865ms) 500
goroutine 11536 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008705030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008705030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001ddf660, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000216110, 0xc004338640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000216110, 0xc002d15400)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000216110, 0xc002d15400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0045f2cc0, 0xc00785dc40, 0x7423e60, 0xc000216110, 0xc002d15400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.344736  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.080134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.347737  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.610527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.367304  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.470372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.367591  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0612 00:17:06.382920  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.158184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.384395  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.04146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.403977  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.404012  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.404163  109237 wrap.go:47] GET /healthz: (1.351133ms) 500
goroutine 11306 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085e5c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085e5c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00383a5e0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc002d5b1b8, 0xc005818c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241a00)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc002d5b1b8, 0xc007241a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0047bb0e0, 0xc00785dc40, 0x7423e60, 0xc002d5b1b8, 0xc007241a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39634]
I0612 00:17:06.404228  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.345954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.404463  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0612 00:17:06.413918  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.413945  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.414105  109237 wrap.go:47] GET /healthz: (1.323445ms) 500
goroutine 11597 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0087351f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0087351f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001262dc0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc004a9a0b8, 0xc005819180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869700)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc004a9a0b8, 0xc003869700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00551b080, 0xc00785dc40, 0x7423e60, 0xc004a9a0b8, 0xc003869700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.423346  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.411879ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.427754  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.419015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.443967  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.089119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.444245  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0612 00:17:06.463556  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.59234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.473833  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (9.711181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.484070  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.26087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.484456  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0612 00:17:06.503123  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.312258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.503788  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.503816  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.503987  109237 wrap.go:47] GET /healthz: (1.183251ms) 500
goroutine 11667 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086fddc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086fddc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0011910a0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc00057d5f8, 0xc002437a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce700)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc00057d5f8, 0xc0040ce700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00555d740, 0xc00785dc40, 0x7423e60, 0xc00057d5f8, 0xc0040ce700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39636]
I0612 00:17:06.505043  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.13942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.514317  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.514352  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.514539  109237 wrap.go:47] GET /healthz: (1.186258ms) 500
goroutine 11308 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008974460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008974460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00383b7c0, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc002d5b308, 0xc003bf3b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10500)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc002d5b308, 0xc005a10500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0047bbc20, 0xc00785dc40, 0x7423e60, 0xc002d5b308, 0xc005a10500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.524238  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.409998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.524496  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0612 00:17:06.543631  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.75653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.545714  109237 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.580441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.564090  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.225785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.564427  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0612 00:17:06.583586  109237 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.865982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.586884  109237 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.79936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.604157  109237 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0612 00:17:06.604217  109237 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0612 00:17:06.604363  109237 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.528531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.604396  109237 wrap.go:47] GET /healthz: (1.41077ms) 500
goroutine 11686 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008ebf500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008ebf500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002994300, 0x1f4)
net/http.Error(0x7fd4ac3cf748, 0xc000216790, 0xc0024297c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
net/http.HandlerFunc.ServeHTTP(0xc004910320, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc003a8fd80, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007d71b90, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43bef1b, 0xe, 0xc0060f5a70, 0xc007d71b90, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
net/http.HandlerFunc.ServeHTTP(0xc0078f0880, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
net/http.HandlerFunc.ServeHTTP(0xc007564510, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
net/http.HandlerFunc.ServeHTTP(0xc0078f08c0, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4ac3cf748, 0xc000216790, 0xc005d46600)
net/http.HandlerFunc.ServeHTTP(0xc007dcc3c0, 0x7fd4ac3cf748, 0xc000216790, 0xc005d46600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002820a20, 0xc00785dc40, 0x7423e60, 0xc000216790, 0xc005d46600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:39636]
I0612 00:17:06.604627  109237 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0612 00:17:06.613877  109237 wrap.go:47] GET /healthz: (1.090503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
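
The 500s above show how the apiserver gates /healthz on its post-start hooks: each hook contributes a named check, and until rbac/bootstrap-roles finishes seeding the default roles and bindings the aggregate report fails, flipping to the 200 on the line above once the hook completes. A minimal self-contained sketch of that aggregation pattern (not the actual k8s.io/apiserver healthz code) in the same [+]/[-] report format:

    package main

    import (
        "fmt"
        "net/http"
    )

    // check pairs a name with a probe; a non-nil error marks it failed.
    type check struct {
        name string
        fn   func() error
    }

    // healthzHandler renders the per-check report and returns 500 if any check fails.
    func healthzHandler(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            report, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    report += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    report += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                http.Error(w, report+"healthz check failed", http.StatusInternalServerError)
                return
            }
            fmt.Fprint(w, report)
        }
    }

    func main() {
        bootstrapDone := false // a real post-start hook would flip this when it finishes
        http.HandleFunc("/healthz", healthzHandler([]check{
            {"ping", func() error { return nil }},
            {"poststarthook/rbac/bootstrap-roles", func() error {
                if !bootstrapDone {
                    return fmt.Errorf("not finished")
                }
                return nil
            }},
        }))
        http.ListenAndServe("127.0.0.1:8080", nil)
    }
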
I0612 00:17:06.615675  109237 wrap.go:47] GET /api/v1/namespaces/default: (1.308272ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.618300  109237 wrap.go:47] POST /api/v1/namespaces: (1.944701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.619630  109237 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (957.831µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.623037  109237 wrap.go:47] POST /api/v1/namespaces/default/services: (3.041349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.624493  109237 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.028957ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.626520  109237 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.695911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.704070  109237 wrap.go:47] GET /healthz: (1.181481ms) 200 [Go-http-client/1.1 127.0.0.1:39636]
W0612 00:17:06.705289  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705331  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705341  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705353  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705364  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705448  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705479  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705510  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705534  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0612 00:17:06.705657  109237 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0612 00:17:06.705716  109237 factory.go:345] Creating scheduler from algorithm provider 'DefaultProvider'
I0612 00:17:06.705741  109237 factory.go:433] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
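
The DefaultProvider wires up the fit predicates (hard filters) and priority functions (scorers) listed on the line above; scheduling is then a two-phase loop: drop every node that fails any predicate, and pick the highest-scoring survivor. An illustrative sketch of that loop with simplified stand-in types (not the real kube-scheduler interfaces):

    package main

    import "fmt"

    type pod struct{ name string }
    type node struct {
        name    string
        freeCPU int
    }

    // predicate is a hard constraint; priority contributes to a node's score.
    type predicate func(pod, node) bool
    type priority func(pod, node) int

    // schedule filters nodes through every predicate, then picks the best score.
    func schedule(p pod, nodes []node, preds []predicate, prios []priority) (string, bool) {
        best, bestScore, found := "", -1, false
        for _, n := range nodes {
            feasible := true
            for _, pred := range preds {
                if !pred(p, n) {
                    feasible = false
                    break
                }
            }
            if !feasible {
                continue
            }
            score := 0
            for _, prio := range prios {
                score += prio(p, n)
            }
            if score > bestScore {
                best, bestScore, found = n.name, score, true
            }
        }
        return best, found
    }

    func main() {
        nodes := []node{{"test-node-0", 4}, {"test-node-1", 8}}
        hasCPU := predicate(func(p pod, n node) bool { return n.freeCPU > 0 })
        leastLoaded := priority(func(p pod, n node) int { return n.freeCPU })
        fmt.Println(schedule(pod{"test-pod"}, nodes, []predicate{hasCPU}, []priority{leastLoaded}))
    }

This is the shape behind the "2 nodes evaluated, 2 nodes were found feasible" line later in the log.
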
I0612 00:17:06.706375  109237 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706410  109237 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706429  109237 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706446  109237 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706528  109237 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706542  109237 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706712  109237 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706726  109237 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706789  109237 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706790  109237 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706810  109237 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706372  109237 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706832  109237 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706956  109237 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706375  109237 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706975  109237 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706801  109237 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.706968  109237 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.707787  109237 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (608.903µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
I0612 00:17:06.707801  109237 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (447.794µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39846]
I0612 00:17:06.707787  109237 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (288.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39858]
I0612 00:17:06.707944  109237 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (651.569µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:06.708134  109237 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (334.006µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39848]
I0612 00:17:06.708212  109237 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (322.945µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39856]
I0612 00:17:06.708643  109237 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (317.286µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39858]
I0612 00:17:06.708931  109237 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (619.159µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39854]
I0612 00:17:06.709061  109237 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=22390 labels= fields= timeout=7m3s
I0612 00:17:06.709386  109237 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=22389 labels= fields= timeout=8m14s
I0612 00:17:06.709216  109237 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=22389 labels= fields= timeout=5m12s
I0612 00:17:06.709666  109237 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=22390 labels= fields= timeout=5m21s
I0612 00:17:06.709679  109237 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=22390 labels= fields= timeout=8m30s
I0612 00:17:06.709905  109237 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (513.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39846]
I0612 00:17:06.709939  109237 get.go:250] Starting watch for /api/v1/pods, rv=22389 labels= fields= timeout=9m32s
I0612 00:17:06.709958  109237 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=22390 labels= fields= timeout=6m20s
I0612 00:17:06.710519  109237 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=22389 labels= fields= timeout=5m51s
I0612 00:17:06.710732  109237 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.710757  109237 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0612 00:17:06.711418  109237 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (398.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39864]
I0612 00:17:06.712010  109237 get.go:250] Starting watch for /api/v1/nodes, rv=22389 labels= fields= timeout=9m31s
I0612 00:17:06.713856  109237 get.go:250] Starting watch for /api/v1/services, rv=22720 labels= fields= timeout=9m26s
I0612 00:17:06.806314  109237 shared_informer.go:176] caches populated
I0612 00:17:06.906587  109237 shared_informer.go:176] caches populated
I0612 00:17:07.006823  109237 shared_informer.go:176] caches populated
I0612 00:17:07.107037  109237 shared_informer.go:176] caches populated
I0612 00:17:07.207312  109237 shared_informer.go:176] caches populated
I0612 00:17:07.307547  109237 shared_informer.go:176] caches populated
I0612 00:17:07.407758  109237 shared_informer.go:176] caches populated
I0612 00:17:07.509490  109237 shared_informer.go:176] caches populated
I0612 00:17:07.609708  109237 shared_informer.go:176] caches populated
I0612 00:17:07.708410  109237 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0612 00:17:07.710578  109237 shared_informer.go:176] caches populated
I0612 00:17:07.710767  109237 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0612 00:17:07.712355  109237 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0612 00:17:07.712400  109237 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0612 00:17:07.712437  109237 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0612 00:17:07.712512  109237 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0612 00:17:07.810891  109237 shared_informer.go:176] caches populated
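
The repeated "caches populated" lines are the test polling until every shared informer has completed its initial LIST/WATCH, and the "forcing resync" burst matches the 1s resync period the informers were started with. A hedged sketch of that wait using client-go (client construction elided; the resource set here is illustrative):

    package informersync

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // waitForCaches starts shared informers with a 1s resync period (matching the
    // "forcing resync" lines above) and blocks until their caches are populated.
    func waitForCaches(client kubernetes.Interface, stopCh <-chan struct{}) error {
        factory := informers.NewSharedInformerFactory(client, 1*time.Second)
        podsSynced := factory.Core().V1().Pods().Informer().HasSynced
        nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced
        factory.Start(stopCh)
        if !cache.WaitForCacheSync(stopCh, podsSynced, nodesSynced) {
            return fmt.Errorf("caches did not populate before shutdown")
        }
        return nil
    }
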
I0612 00:17:07.814487  109237 wrap.go:47] POST /api/v1/nodes: (2.683194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.817050  109237 wrap.go:47] POST /api/v1/nodes: (1.829712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.825044  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods: (7.108878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.825604  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.825633  109237 scheduler.go:456] Attempting to schedule pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.825897  109237 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-0"
I0612 00:17:07.825928  109237 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0612 00:17:07.825970  109237 factory.go:727] Attempting to bind test-pod to test-node-0
I0612 00:17:07.828271  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod/binding: (1.986215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.828977  109237 scheduler.go:593] pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod is bound successfully on node test-node-0, 2 nodes evaluated, 2 nodes were found feasible
I0612 00:17:07.830972  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (1.667462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
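
The successful cycle just above is the happy path: assume volumes, run reserve/prebind, POST to the pod's binding subresource, then record an event. A hedged sketch of the call behind "POST .../pods/test-pod/binding", using client-go's pod Bind expansion (signature as of this log's client-go vintage; newer releases also take a context):

    package bindsketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bindPod issues the binding subresource request that assigns a pod to a node.
    func bindPod(client kubernetes.Interface, namespace, podName, nodeName string) error {
        binding := &v1.Binding{
            ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: podName},
            Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
        }
        return client.CoreV1().Pods(namespace).Bind(binding)
    }
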
I0612 00:17:07.927946  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.997253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.935322  109237 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (6.800465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.938697  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.265321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.945234  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods: (6.125402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.945685  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.945700  109237 scheduler.go:456] Attempting to schedule pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.945930  109237 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1"
I0612 00:17:07.945944  109237 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0612 00:17:07.945985  109237 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0612 00:17:07.946003  109237 factory.go:678] Error scheduling unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0612 00:17:07.946032  109237 factory.go:736] Updating pod condition for unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0612 00:17:07.951065  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (4.084579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0612 00:17:07.952978  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.952999  109237 scheduler.go:456] Attempting to schedule pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.953264  109237 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-0"
I0612 00:17:07.953290  109237 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-0": all PVCs bound and nothing to do
E0612 00:17:07.953340  109237 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0612 00:17:07.953360  109237 factory.go:678] Error scheduling unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0612 00:17:07.953390  109237 factory.go:736] Updating pod condition for unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0612 00:17:07.954846  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (7.829391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:07.958594  109237 wrap.go:47] PATCH /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events/test-pod.15a74b55d64086f9: (4.412744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0612 00:17:07.959122  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (4.938442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39888]
I0612 00:17:07.959256  109237 wrap.go:47] PUT /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod/status: (12.296024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39878]
I0612 00:17:07.959318  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (3.141893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
E0612 00:17:07.962555  109237 factory.go:702] pod is already present in unschedulableQ
I0612 00:17:07.966154  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.966451  109237 scheduler.go:452] Skip schedule deleting pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.967570  109237 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (5.228454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:07.968292  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (1.574034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39888]
I0612 00:17:07.969991  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (976.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:07.972354  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods: (1.938447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:07.972641  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.972665  109237 scheduler.go:456] Attempting to schedule pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.972873  109237 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1"
I0612 00:17:07.972888  109237 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1": all PVCs bound and nothing to do
I0612 00:17:07.972937  109237 framework.go:198] rejected by prebind-plugin at prebind: reject pod test-pod
E0612 00:17:07.972952  109237 factory.go:678] Error scheduling unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod: rejected by prebind-plugin at prebind: reject pod test-pod; retrying
I0612 00:17:07.972980  109237 factory.go:736] Updating pod condition for unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod to (PodScheduled==False, Reason=Unschedulable)
I0612 00:17:07.975549  109237 wrap.go:47] PUT /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod/status: (1.950611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:07.975954  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.975969  109237 scheduler.go:456] Attempting to schedule pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:07.977040  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.735284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39890]
I0612 00:17:07.977210  109237 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1"
I0612 00:17:07.977234  109237 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1": all PVCs bound and nothing to do
I0612 00:17:07.977537  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (1.949935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0612 00:17:07.977807  109237 framework.go:198] rejected by prebind-plugin at prebind: reject pod test-pod
E0612 00:17:07.977927  109237 factory.go:678] Error scheduling unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod: rejected by prebind-plugin at prebind: reject pod test-pod; retrying
I0612 00:17:07.978004  109237 factory.go:736] Updating pod condition for unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod to (PodScheduled==False, Reason=Unschedulable)
I0612 00:17:07.979369  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.145366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39890]
E0612 00:17:07.979597  109237 factory.go:702] pod is already present in unschedulableQ
I0612 00:17:07.981350  109237 wrap.go:47] PATCH /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events/test-pod.15a74b55d7dbbb89: (2.920053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
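
Note the two distinct failure modes the same prebind plugin injects above: an internal error logs "error while running prebind-plugin" and sets Reason=SchedulerError, while an explicit rejection logs "rejected by prebind-plugin" and sets Reason=Unschedulable; in both cases the framework is expected to run unreserve plugins so the assumed resources are released before the retry. A hedged sketch of such a plugin with simplified stand-in types (the knob names failPrebind and rejectPrebind are hypothetical, inferred from the injected messages):

    package prebindsketch

    // Code distinguishes the two retry paths visible in the log.
    type Code int

    const (
        Success       Code = iota
        Error              // maps to the "Reason=SchedulerError" path
        Unschedulable      // maps to the "Reason=Unschedulable" path
    )

    type Status struct {
        Code    Code
        Message string
    }

    // PrebindPlugin injects either failure mode, mirroring the test's behavior.
    type PrebindPlugin struct {
        failPrebind      bool
        rejectPrebind    bool
        numPrebindCalled int
    }

    func (pp *PrebindPlugin) Prebind(podName, nodeName string) *Status {
        pp.numPrebindCalled++
        if pp.failPrebind {
            return &Status{Error, "injecting failure for pod " + podName}
        }
        if pp.rejectPrebind {
            return &Status{Unschedulable, "reject pod " + podName}
        }
        return nil // nil means success: the scheduler proceeds to bind
    }
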
I0612 00:17:08.074883  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.643326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:08.086048  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:08.086096  109237 scheduler.go:452] Skip schedule deleting pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:08.093161  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (2.361836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39890]
I0612 00:17:08.096773  109237 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (21.312873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:08.101134  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.251554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:08.103505  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods: (1.823939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:08.104181  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:08.104227  109237 scheduler.go:456] Attempting to schedule pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:08.104525  109237 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1"
I0612 00:17:08.104544  109237 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0612 00:17:08.104592  109237 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0612 00:17:08.104610  109237 factory.go:678] Error scheduling unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0612 00:17:08.104638  109237 factory.go:736] Updating pod condition for unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0612 00:17:08.110119  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (4.561647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39890]
I0612 00:17:08.111290  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (3.23572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
I0612 00:17:08.111612  109237 wrap.go:47] PUT /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod/status: (5.610115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0612 00:17:08.121835  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (1.558002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
I0612 00:17:08.125123  109237 scheduling_queue.go:815] About to try and schedule pod unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:08.125172  109237 scheduler.go:452] Skip schedule deleting pod: unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/test-pod
I0612 00:17:08.127542  109237 wrap.go:47] POST /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/events: (2.013952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39890]
I0612 00:17:08.127577  109237 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (5.201855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
I0612 00:17:08.130011  109237 wrap.go:47] GET /api/v1/namespaces/unreserve-plugineef38780-cfc6-43a1-a2a1-8a330d8ee5d2/pods/test-pod: (975.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
I0612 00:17:08.130929  109237 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=22390&timeout=8m30s&timeoutSeconds=510&watch=true: (1.421426643s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39858]
I0612 00:17:08.130939  109237 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=22389&timeout=8m14s&timeoutSeconds=494&watch=true: (1.421790178s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39848]
I0612 00:17:08.130952  109237 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=22390&timeout=5m21s&timeoutSeconds=321&watch=true: (1.421495299s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39866]
I0612 00:17:08.131017  109237 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=22389&timeout=5m51s&timeoutSeconds=351&watch=true: (1.420737747s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39868]
I0612 00:17:08.130955  109237 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=22390&timeout=6m20s&timeoutSeconds=380&watch=true: (1.421627283s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39634]
E0612 00:17:08.131076  109237 scheduling_queue.go:818] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0612 00:17:08.131084  109237 wrap.go:47] GET /api/v1/services?resourceVersion=22720&timeout=9m26s&timeoutSeconds=566&watch=true: (1.417475154s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39850]
I0612 00:17:08.131140  109237 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=22389&timeout=5m12s&timeoutSeconds=312&watch=true: (1.422293969s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39856]
I0612 00:17:08.131145  109237 wrap.go:47] GET /api/v1/nodes?resourceVersion=22389&timeout=9m31s&timeoutSeconds=571&watch=true: (1.419369352s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39864]
I0612 00:17:08.131181  109237 wrap.go:47] GET /api/v1/pods?resourceVersion=22389&timeout=9m32s&timeoutSeconds=572&watch=true: (1.421459685s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39636]
I0612 00:17:08.131294  109237 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=22390&timeout=7m3s&timeoutSeconds=423&watch=true: (1.422481371s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39860]
I0612 00:17:08.142633  109237 wrap.go:47] DELETE /api/v1/nodes: (12.200378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
I0612 00:17:08.142914  109237 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0612 00:17:08.145671  109237 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.381453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
I0612 00:17:08.148046  109237 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.913852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39906]
framework_test.go:466: test #1: Expected the unreserve plugin to be called 2 times, was called 1 times.
framework_test.go:474: test #2: Expected the unreserve plugin to be called 2 times, was called 3 times.
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190612-001133.xml
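
The assertion in framework_test.go is that the unreserve plugin runs exactly once per failed prebind attempt; here test #1 recorded one call where two were expected and test #2 recorded three where two were expected, which is consistent with the scheduler retrying the pod a different number of times than the test assumed before its pod deletions took effect (see the "Skip schedule deleting pod" lines above); that reading is a plausible inference from the log, not a confirmed diagnosis. A hedged sketch (hypothetical names) of the counting plugin such an assertion reads; the mutex matters because scheduling retries and the test goroutine touch the counter concurrently:

    package main

    import (
        "fmt"
        "sync"
    )

    // UnreservePlugin counts how often Unreserve runs; the test compares this
    // counter against the number of failed scheduling attempts.
    type UnreservePlugin struct {
        mu                 sync.Mutex
        numUnreserveCalled int
    }

    func (up *UnreservePlugin) Unreserve(podName, nodeName string) {
        up.mu.Lock()
        defer up.mu.Unlock()
        up.numUnreserveCalled++
    }

    func (up *UnreservePlugin) calls() int {
        up.mu.Lock()
        defer up.mu.Unlock()
        return up.numUnreserveCalled
    }

    func main() {
        up := &UnreservePlugin{}
        up.Unreserve("test-pod", "test-node-1")
        fmt.Println(up.calls()) // an assertion would compare this to the expected attempts
    }
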



1692 passed tests and 4 skipped tests (not shown)

Error lines from build-log.txt

... skipping 307 lines ...
W0612 00:05:48.782] I0612 00:05:48.782251   48067 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0612 00:05:48.783] I0612 00:05:48.783329   48067 server.go:560] external host was not specified, using 172.17.0.2
W0612 00:05:48.784] W0612 00:05:48.783360   48067 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0612 00:05:48.784] I0612 00:05:48.784109   48067 server.go:147] Version: v1.16.0-alpha.0.920+a8b6cf287b0a73
W0612 00:05:49.270] I0612 00:05:49.269881   48067 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0612 00:05:49.270] I0612 00:05:49.269975   48067 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0612 00:05:49.271] E0612 00:05:49.270515   48067 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.271] E0612 00:05:49.271242   48067 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.272] E0612 00:05:49.271317   48067 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.272] E0612 00:05:49.271341   48067 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.272] E0612 00:05:49.271362   48067 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.273] E0612 00:05:49.271401   48067 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.273] E0612 00:05:49.271419   48067 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.273] E0612 00:05:49.271437   48067 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.273] E0612 00:05:49.271481   48067 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.274] E0612 00:05:49.271520   48067 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.274] E0612 00:05:49.271547   48067 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.274] E0612 00:05:49.271587   48067 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:49.274] I0612 00:05:49.271672   48067 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0612 00:05:49.275] I0612 00:05:49.271681   48067 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0612 00:05:49.275] I0612 00:05:49.273291   48067 client.go:354] parsed scheme: ""
W0612 00:05:49.275] I0612 00:05:49.273314   48067 client.go:354] scheme "" not registered, fallback to default scheme
W0612 00:05:49.275] I0612 00:05:49.273365   48067 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0612 00:05:49.275] I0612 00:05:49.273567   48067 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0612 00:05:49.879] W0612 00:05:49.878588   48067 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0612 00:05:50.269] I0612 00:05:50.268597   48067 client.go:354] parsed scheme: ""
W0612 00:05:50.269] I0612 00:05:50.268653   48067 client.go:354] scheme "" not registered, fallback to default scheme
W0612 00:05:50.269] I0612 00:05:50.268725   48067 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0612 00:05:50.270] I0612 00:05:50.268817   48067 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0612 00:05:50.270] I0612 00:05:50.269378   48067 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0612 00:05:50.781] E0612 00:05:50.780974   48067 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.782] E0612 00:05:50.781044   48067 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.782] E0612 00:05:50.781073   48067 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.783] E0612 00:05:50.781133   48067 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.783] E0612 00:05:50.781166   48067 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.783] E0612 00:05:50.781238   48067 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.784] E0612 00:05:50.781261   48067 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.784] E0612 00:05:50.781278   48067 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.785] E0612 00:05:50.781354   48067 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.785] E0612 00:05:50.781410   48067 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.786] E0612 00:05:50.781448   48067 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0612 00:05:50.786] E0612 00:05:50.781484   48067 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
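
The "duplicate metrics collector registration attempted" errors above appear benign: the admission plugin set is initialized more than once in the same process (the "Loaded 4 mutating admission controller(s)" lines bracket each burst), so re-registering the same Prometheus collectors fails. A minimal sketch of that failure mode with client_golang (the metric name is illustrative):

    package main

    import (
        "fmt"

        "github.com/prometheus/client_golang/prometheus"
    )

    func main() {
        c := prometheus.NewCounter(prometheus.CounterOpts{
            Name: "admission_quota_controller_adds", // illustrative name
            Help: "example counter",
        })
        prometheus.MustRegister(c) // first registration succeeds
        if err := prometheus.Register(c); err != nil {
            // re-registering an identical collector fails like the log lines above
            if _, ok := err.(prometheus.AlreadyRegisteredError); ok {
                fmt.Println("duplicate metrics collector registration attempted")
            }
        }
    }
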
W0612 00:05:50.787] I0612 00:05:50.781516   48067 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0612 00:05:50.787] I0612 00:05:50.781529   48067 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0612 00:05:50.787] I0612 00:05:50.783120   48067 client.go:354] parsed scheme: ""
W0612 00:05:50.788] I0612 00:05:50.783146   48067 client.go:354] scheme "" not registered, fallback to default scheme
W0612 00:05:50.788] I0612 00:05:50.783182   48067 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0612 00:05:50.789] I0612 00:05:50.783426   48067 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 98 lines ...
W0612 00:06:28.538] I0612 00:06:28.377996   51422 serviceaccounts_controller.go:117] Starting service account controller
W0612 00:06:28.539] I0612 00:06:28.378143   51422 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0612 00:06:28.539] I0612 00:06:28.378872   51422 controllermanager.go:532] Started "disruption"
W0612 00:06:28.539] I0612 00:06:28.379196   51422 disruption.go:333] Starting disruption controller
W0612 00:06:28.539] I0612 00:06:28.379401   51422 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0612 00:06:28.539] I0612 00:06:28.379305   51422 node_lifecycle_controller.go:77] Sending events to api server
W0612 00:06:28.540] E0612 00:06:28.379846   51422 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0612 00:06:28.540] W0612 00:06:28.380593   51422 controllermanager.go:524] Skipping "cloud-node-lifecycle"
W0612 00:06:28.793] I0612 00:06:28.792607   51422 controllermanager.go:532] Started "garbagecollector"
W0612 00:06:28.793] I0612 00:06:28.793141   51422 garbagecollector.go:128] Starting garbage collector controller
W0612 00:06:28.794] I0612 00:06:28.793240   51422 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0612 00:06:28.794] I0612 00:06:28.793332   51422 graph_builder.go:280] GraphBuilder running
W0612 00:06:28.795] I0612 00:06:28.794833   51422 controllermanager.go:532] Started "daemonset"
... skipping 54 lines ...
W0612 00:06:28.818] I0612 00:06:28.818172   51422 controllermanager.go:532] Started "csrapproving"
W0612 00:06:28.818] I0612 00:06:28.818298   51422 certificate_controller.go:113] Starting certificate controller
W0612 00:06:28.818] I0612 00:06:28.818314   51422 controller_utils.go:1029] Waiting for caches to sync for certificate controller
W0612 00:06:28.819] I0612 00:06:28.818786   51422 controllermanager.go:532] Started "ttl"
W0612 00:06:28.819] I0612 00:06:28.818907   51422 ttl_controller.go:116] Starting TTL controller
W0612 00:06:28.819] I0612 00:06:28.818928   51422 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0612 00:06:28.819] E0612 00:06:28.819415   51422 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0612 00:06:28.819] W0612 00:06:28.819439   51422 controllermanager.go:524] Skipping "service"
W0612 00:06:28.820] I0612 00:06:28.819446   51422 core.go:170] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0612 00:06:28.820] W0612 00:06:28.819451   51422 controllermanager.go:524] Skipping "route"
W0612 00:06:28.820] I0612 00:06:28.820211   51422 controllermanager.go:532] Started "persistentvolume-expander"
W0612 00:06:28.820] I0612 00:06:28.820222   51422 expand_controller.go:300] Starting expand controller
W0612 00:06:28.821] W0612 00:06:28.820232   51422 controllermanager.go:524] Skipping "ttl-after-finished"
... skipping 11 lines ...
W0612 00:06:28.913] I0612 00:06:28.913555   51422 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0612 00:06:28.914] I0612 00:06:28.914256   51422 controller_utils.go:1036] Caches are synced for job controller
W0612 00:06:28.915] I0612 00:06:28.915148   51422 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0612 00:06:28.917] I0612 00:06:28.916929   51422 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0612 00:06:28.918] I0612 00:06:28.918199   51422 controller_utils.go:1036] Caches are synced for HPA controller
W0612 00:06:28.918] I0612 00:06:28.918501   51422 controller_utils.go:1036] Caches are synced for certificate controller
W0612 00:06:28.939] E0612 00:06:28.939183   51422 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0612 00:06:28.958] E0612 00:06:28.957767   51422 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0612 00:06:29.001] I0612 00:06:29.001250   51422 controller_utils.go:1036] Caches are synced for PV protection controller
I0612 00:06:29.102] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0612 00:06:29.102] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   33s
I0612 00:06:29.103] Recording: run_kubectl_version_tests
I0612 00:06:29.103] Running command: run_kubectl_version_tests
I0612 00:06:29.103] 
... skipping 10 lines ...
I0612 00:06:29.104]   "buildDate": "2019-06-12T00:04:52Z",
I0612 00:06:29.104]   "goVersion": "go1.12.1",
I0612 00:06:29.105]   "compiler": "gc",
I0612 00:06:29.105]   "platform": "linux/amd64"
I0612 00:06:29.236] }+++ [0612 00:06:29] Testing kubectl version: check client only output matches expected output
W0612 00:06:29.337] I0612 00:06:29.210484   51422 controller_utils.go:1036] Caches are synced for stateful set controller
W0612 00:06:29.373] W0612 00:06:29.373286   51422 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0612 00:06:29.411] I0612 00:06:29.411444   51422 controller_utils.go:1036] Caches are synced for attach detach controller
W0612 00:06:29.419] I0612 00:06:29.419122   51422 controller_utils.go:1036] Caches are synced for TTL controller
I0612 00:06:29.520] Successful: the flag '--client' shows correct client info
I0612 00:06:29.520] Successful: the flag '--client' correctly has no server version info
I0612 00:06:29.520] +++ [0612 00:06:29] Testing kubectl version: verify json output
I0612 00:06:29.665] Successful: --output json has correct client info
... skipping 68 lines ...
I0612 00:06:32.732] +++ working dir: /go/src/k8s.io/kubernetes
I0612 00:06:32.734] +++ command: run_RESTMapper_evaluation_tests
I0612 00:06:32.745] +++ [0612 00:06:32] Creating namespace namespace-1560297992-13800
I0612 00:06:32.817] namespace/namespace-1560297992-13800 created
I0612 00:06:32.881] Context "test" modified.
I0612 00:06:32.888] +++ [0612 00:06:32] Testing RESTMapper
I0612 00:06:32.986] +++ [0612 00:06:32] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0612 00:06:33.001] +++ exit code: 0
I0612 00:06:33.113] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0612 00:06:33.114] bindings                                                                      true         Binding
I0612 00:06:33.114] componentstatuses                 cs                                          false        ComponentStatus
I0612 00:06:33.114] configmaps                        cm                                          true         ConfigMap
I0612 00:06:33.114] endpoints                         ep                                          true         Endpoints
... skipping 652 lines ...
I0612 00:06:51.419] core.sh:223: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-secret
I0612 00:06:51.501] core.sh:224: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.type}}: test-type
I0612 00:06:51.592] core.sh:229: Successful get configmaps --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq $id_field \"test-configmap\" }}found{{end}}{{end}}:: :
I0612 00:06:51.662] configmap/test-configmap created
I0612 00:06:51.747] core.sh:235: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap
I0612 00:06:51.823] poddisruptionbudget.policy/test-pdb-1 created
W0612 00:06:51.924] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0612 00:06:51.924] error: setting 'all' parameter but found a non empty selector. 
W0612 00:06:51.925] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0612 00:06:51.925] I0612 00:06:51.819911   48067 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I0612 00:06:52.026] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I0612 00:06:52.026] poddisruptionbudget.policy/test-pdb-2 created
I0612 00:06:52.094] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I0612 00:06:52.166] poddisruptionbudget.policy/test-pdb-3 created
I0612 00:06:52.255] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0612 00:06:52.328] poddisruptionbudget.policy/test-pdb-4 created
I0612 00:06:52.417] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0612 00:06:52.568] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:06:52.767] pod/env-test-pod created
W0612 00:06:52.867] error: min-available and max-unavailable cannot be both specified
I0612 00:06:52.968] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0612 00:06:52.968] Name:         env-test-pod
I0612 00:06:52.969] Namespace:    test-kubectl-describe-pod
I0612 00:06:52.969] Priority:     0
I0612 00:06:52.969] Node:         <none>
I0612 00:06:52.969] Labels:       <none>
... skipping 142 lines ...
W0612 00:07:04.937] I0612 00:07:03.784848   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298019-15549", Name:"modified", UID:"823f50ee-8b3c-4134-9894-ae470c19bec3", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-bg57x
W0612 00:07:04.938] I0612 00:07:04.503081   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298019-15549", Name:"modified", UID:"9b1bef2c-c956-479e-b706-fd21cc22bea4", APIVersion:"v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-zm4kh
I0612 00:07:05.116] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:05.291] pod/valid-pod created
I0612 00:07:05.391] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0612 00:07:05.546] Successful
I0612 00:07:05.546] message:Error from server: cannot restore map from string
I0612 00:07:05.546] has:cannot restore map from string
I0612 00:07:05.627] Successful
I0612 00:07:05.627] message:pod/valid-pod patched (no change)
I0612 00:07:05.628] has:patched (no change)
I0612 00:07:05.715] pod/valid-pod patched
I0612 00:07:05.807] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0612 00:07:06.316] pod/valid-pod patched
I0612 00:07:06.409] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0612 00:07:06.480] pod/valid-pod patched
I0612 00:07:06.569] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0612 00:07:06.728] pod/valid-pod patched
I0612 00:07:06.824] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0612 00:07:06.995] +++ [0612 00:07:06] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0612 00:07:07.096] E0612 00:07:05.537489   48067 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
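The Conflict above is optimistic concurrency at work: the patch pinned metadata.resourceVersion 497, the live pod had already moved past that version, and the apiserver refused the write. A sketch of the failing shape (the x=y label is hypothetical):
  kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"497","labels":{"x":"y"}}}'   # 409 Conflict if 497 is stale
Dropping resourceVersion from the patch, or re-reading the object first, avoids the conflict.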
I0612 00:07:07.245] pod "valid-pod" deleted
I0612 00:07:07.256] pod/valid-pod replaced
I0612 00:07:07.358] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0612 00:07:07.524] Successful
I0612 00:07:07.524] message:error: --grace-period must have --force specified
I0612 00:07:07.524] has:\-\-grace-period must have \-\-force specified
I0612 00:07:07.704] Successful
I0612 00:07:07.705] message:error: --timeout must have --force specified
I0612 00:07:07.705] has:\-\-timeout must have \-\-force specified
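Both failures assert the same rule: --grace-period and --timeout on kubectl delete are only honored together with --force. A sketch:
  kubectl delete pod valid-pod --grace-period=0           # error: must also pass --force
  kubectl delete pod valid-pod --grace-period=0 --force   # accepted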
W0612 00:07:07.883] W0612 00:07:07.883136   51422 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0612 00:07:07.984] node/node-v1-test created
I0612 00:07:08.064] node/node-v1-test replaced
I0612 00:07:08.162] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0612 00:07:08.241] node "node-v1-test" deleted
I0612 00:07:08.340] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0612 00:07:08.649] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 58 lines ...
I0612 00:07:13.897] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:14.056] pod/test-pod created
W0612 00:07:14.156] Edit cancelled, no changes made.
W0612 00:07:14.157] Edit cancelled, no changes made.
W0612 00:07:14.157] Edit cancelled, no changes made.
W0612 00:07:14.157] Edit cancelled, no changes made.
W0612 00:07:14.158] error: 'name' already has a value (valid-pod), and --overwrite is false
W0612 00:07:14.158] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0612 00:07:14.158] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0612 00:07:14.259] pod "test-pod" deleted
I0612 00:07:14.259] +++ [0612 00:07:14] Creating namespace namespace-1560298034-8839
I0612 00:07:14.298] namespace/namespace-1560298034-8839 created
I0612 00:07:14.367] Context "test" modified.
... skipping 41 lines ...
I0612 00:07:17.418] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0612 00:07:17.420] +++ working dir: /go/src/k8s.io/kubernetes
I0612 00:07:17.421] +++ command: run_kubectl_create_error_tests
I0612 00:07:17.431] +++ [0612 00:07:17] Creating namespace namespace-1560298037-31329
I0612 00:07:17.498] namespace/namespace-1560298037-31329 created
I0612 00:07:17.563] Context "test" modified.
I0612 00:07:17.569] +++ [0612 00:07:17] Testing kubectl create with error
W0612 00:07:17.670] Error: must specify one of -f and -k
W0612 00:07:17.670] 
W0612 00:07:17.670] Create a resource from a file or from stdin.
W0612 00:07:17.671] 
W0612 00:07:17.671]  JSON and YAML formats are accepted.
W0612 00:07:17.671] 
W0612 00:07:17.671] Examples:
... skipping 41 lines ...
W0612 00:07:17.678] 
W0612 00:07:17.679] Usage:
W0612 00:07:17.679]   kubectl create -f FILENAME [options]
W0612 00:07:17.679] 
W0612 00:07:17.679] Use "kubectl <command> --help" for more information about a given command.
W0612 00:07:17.679] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0612 00:07:17.824] +++ [0612 00:07:17] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0612 00:07:17.925] kubectl convert is DEPRECATED and will be removed in a future version.
W0612 00:07:17.925] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0612 00:07:18.025] +++ exit code: 0
I0612 00:07:18.027] Recording: run_kubectl_apply_tests
I0612 00:07:18.027] Running command: run_kubectl_apply_tests
I0612 00:07:18.044] 
... skipping 19 lines ...
W0612 00:07:20.263] I0612 00:07:20.262495   48067 client.go:354] parsed scheme: ""
W0612 00:07:20.263] I0612 00:07:20.262633   48067 client.go:354] scheme "" not registered, fallback to default scheme
W0612 00:07:20.263] I0612 00:07:20.262716   48067 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0612 00:07:20.264] I0612 00:07:20.262795   48067 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0612 00:07:20.264] I0612 00:07:20.263203   48067 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0612 00:07:20.266] I0612 00:07:20.266129   48067 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0612 00:07:20.358] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0612 00:07:20.459] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0612 00:07:20.459] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0612 00:07:20.469] +++ exit code: 0
I0612 00:07:20.503] Recording: run_kubectl_run_tests
I0612 00:07:20.504] Running command: run_kubectl_run_tests
I0612 00:07:20.522] 
... skipping 91 lines ...
I0612 00:07:22.871] Context "test" modified.
I0612 00:07:22.878] +++ [0612 00:07:22] Testing kubectl create filter
I0612 00:07:22.960] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:23.123] pod/selector-test-pod created
I0612 00:07:23.218] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0612 00:07:23.299] Successful
I0612 00:07:23.300] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0612 00:07:23.300] has:pods "selector-test-pod-dont-apply" not found
I0612 00:07:23.372] pod "selector-test-pod" deleted
I0612 00:07:23.389] +++ exit code: 0
I0612 00:07:23.416] Recording: run_kubectl_apply_deployments_tests
I0612 00:07:23.417] Running command: run_kubectl_apply_deployments_tests
I0612 00:07:23.433] 
... skipping 38 lines ...
I0612 00:07:25.398] apps.sh:142: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:25.492] apps.sh:143: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:25.580] apps.sh:147: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:25.751] deployment.extensions/nginx created
I0612 00:07:25.849] apps.sh:151: Successful get deployment nginx {{.metadata.name}}: nginx
I0612 00:07:30.103] Successful
I0612 00:07:30.104] message:Error from server (Conflict): error when applying patch:
I0612 00:07:30.104] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1560298043-20561\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0612 00:07:30.104] to:
I0612 00:07:30.104] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0612 00:07:30.104] Name: "nginx", Namespace: "namespace-1560298043-20561"
I0612 00:07:30.107] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1560298043-20561\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-06-12T00:07:25Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-06-12T00:07:25Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-06-12T00:07:25Z"]] "name":"nginx" "namespace":"namespace-1560298043-20561" "resourceVersion":"603" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1560298043-20561/deployments/nginx" "uid":"85de1d18-1a0b-4d70-a6bc-96082dd9729f"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-06-12T00:07:25Z" "lastUpdateTime":"2019-06-12T00:07:25Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0612 00:07:30.107] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0612 00:07:30.107] has:Error from server (Conflict)
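The Conflict above comes from the applied manifest pinning "resourceVersion":"99" (visible in the last-applied-configuration annotation) while the live Deployment had moved on, so the server rejected the patch. Under that reading, the usual way out is to stop pinning the version and re-apply:
  kubectl get deployment nginx -o yaml   # inspect the current resourceVersion
  # remove metadata.resourceVersion from the manifest, then:
  kubectl apply -f hack/testdata/deployment-label-change2.yaml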
W0612 00:07:30.208] I0612 00:07:25.757506   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298043-20561", Name:"nginx", UID:"85de1d18-1a0b-4d70-a6bc-96082dd9729f", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-54d5cbd75f to 3
W0612 00:07:30.209] I0612 00:07:25.761276   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298043-20561", Name:"nginx-54d5cbd75f", UID:"d9b1dd87-828c-4e3f-be1c-96b72c2b1a91", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-54d5cbd75f-bczk8
W0612 00:07:30.209] I0612 00:07:25.766554   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298043-20561", Name:"nginx-54d5cbd75f", UID:"d9b1dd87-828c-4e3f-be1c-96b72c2b1a91", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-54d5cbd75f-68ctd
W0612 00:07:30.210] I0612 00:07:25.767341   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298043-20561", Name:"nginx-54d5cbd75f", UID:"d9b1dd87-828c-4e3f-be1c-96b72c2b1a91", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-54d5cbd75f-6vdkn
W0612 00:07:31.885] I0612 00:07:31.884724   51422 horizontal.go:340] Horizontal Pod Autoscaler frontend has been deleted in namespace-1560298035-30470
I0612 00:07:35.366] deployment.extensions/nginx configured
... skipping 174 lines ...
I0612 00:07:42.734] +++ [0612 00:07:42] Creating namespace namespace-1560298062-8918
I0612 00:07:42.805] namespace/namespace-1560298062-8918 created
I0612 00:07:42.872] Context "test" modified.
I0612 00:07:42.879] +++ [0612 00:07:42] Testing kubectl get
I0612 00:07:42.962] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:43.046] Successful
I0612 00:07:43.047] message:Error from server (NotFound): pods "abc" not found
I0612 00:07:43.047] has:pods "abc" not found
I0612 00:07:43.130] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:43.211] Successful
I0612 00:07:43.212] message:Error from server (NotFound): pods "abc" not found
I0612 00:07:43.212] has:pods "abc" not found
I0612 00:07:43.296] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:43.379] Successful
I0612 00:07:43.380] message:{
I0612 00:07:43.380]     "apiVersion": "v1",
I0612 00:07:43.380]     "items": [],
... skipping 23 lines ...
I0612 00:07:43.694] has not:No resources found
I0612 00:07:43.773] Successful
I0612 00:07:43.774] message:NAME
I0612 00:07:43.774] has not:No resources found
I0612 00:07:43.858] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:43.951] Successful
I0612 00:07:43.952] message:error: the server doesn't have a resource type "foobar"
I0612 00:07:43.952] has not:No resources found
I0612 00:07:44.025] Successful
I0612 00:07:44.025] message:No resources found.
I0612 00:07:44.025] has:No resources found
I0612 00:07:44.098] Successful
I0612 00:07:44.098] message:
I0612 00:07:44.098] has not:No resources found
I0612 00:07:44.173] Successful
I0612 00:07:44.173] message:No resources found.
I0612 00:07:44.174] has:No resources found
I0612 00:07:44.250] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:44.327] Successful
I0612 00:07:44.328] message:Error from server (NotFound): pods "abc" not found
I0612 00:07:44.328] has:pods "abc" not found
I0612 00:07:44.329] FAIL!
I0612 00:07:44.329] message:Error from server (NotFound): pods "abc" not found
I0612 00:07:44.330] has not:List
I0612 00:07:44.330] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0612 00:07:44.455] Successful
I0612 00:07:44.456] message:I0612 00:07:44.399883   62078 loader.go:359] Config loaded from file:  /tmp/tmp.7rAceWMmMn/.kube/config
I0612 00:07:44.456] I0612 00:07:44.401910   62078 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0612 00:07:44.456] I0612 00:07:44.428718   62078 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0612 00:07:49.979] Successful
I0612 00:07:49.979] message:NAME    DATA   AGE
I0612 00:07:49.979] one     0      0s
I0612 00:07:49.979] three   0      0s
I0612 00:07:49.979] two     0      0s
I0612 00:07:49.979] STATUS    REASON          MESSAGE
I0612 00:07:49.980] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0612 00:07:49.980] has not:watch is only supported on individual resources
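The InternalError rows above appear whenever the client-side request timeout cuts a watch stream mid-event, so the decoder reports a canceled request rather than a clean end-of-watch. A sketch of the shape being tested (the 1s timeout is hypothetical):
  kubectl get configmaps --watch --request-timeout=1s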
I0612 00:07:51.063] Successful
I0612 00:07:51.063] message:STATUS    REASON          MESSAGE
I0612 00:07:51.063] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0612 00:07:51.063] has not:watch is only supported on individual resources
I0612 00:07:51.069] +++ [0612 00:07:51] Creating namespace namespace-1560298071-8661
I0612 00:07:51.139] namespace/namespace-1560298071-8661 created
I0612 00:07:51.205] Context "test" modified.
I0612 00:07:51.295] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:51.459] pod/valid-pod created
... skipping 104 lines ...
I0612 00:07:51.552] }
I0612 00:07:51.641] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0612 00:07:51.879] <no value>Successful
I0612 00:07:51.879] message:valid-pod:
I0612 00:07:51.879] has:valid-pod:
I0612 00:07:51.956] Successful
I0612 00:07:51.957] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0612 00:07:51.957] 	template was:
I0612 00:07:51.957] 		{.missing}
I0612 00:07:51.957] 	object given to jsonpath engine was:
I0612 00:07:51.958] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-06-12T00:07:51Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-06-12T00:07:51Z"}}, "name":"valid-pod", "namespace":"namespace-1560298071-8661", "resourceVersion":"706", "selfLink":"/api/v1/namespaces/namespace-1560298071-8661/pods/valid-pod", "uid":"4697cce9-6f97-4267-a218-41068c86c2fc"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0612 00:07:51.959] has:missing is not found
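The jsonpath failure above is deliberate: the expression names a key the object does not have, and kubectl dumps the whole object to help debug the template. A sketch:
  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # prints valid-pod
  kubectl get pod valid-pod -o jsonpath='{.missing}'         # error: missing is not found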
I0612 00:07:52.038] Successful
I0612 00:07:52.038] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0612 00:07:52.039] 	template was:
I0612 00:07:52.039] 		{{.missing}}
I0612 00:07:52.039] 	raw data was:
I0612 00:07:52.040] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-06-12T00:07:51Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-06-12T00:07:51Z"}],"name":"valid-pod","namespace":"namespace-1560298071-8661","resourceVersion":"706","selfLink":"/api/v1/namespaces/namespace-1560298071-8661/pods/valid-pod","uid":"4697cce9-6f97-4267-a218-41068c86c2fc"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0612 00:07:52.040] 	object given to template engine was:
I0612 00:07:52.041] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-06-12T00:07:51Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-06-12T00:07:51Z]] name:valid-pod namespace:namespace-1560298071-8661 resourceVersion:706 selfLink:/api/v1/namespaces/namespace-1560298071-8661/pods/valid-pod uid:4697cce9-6f97-4267-a218-41068c86c2fc] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0612 00:07:52.042] has:map has no entry for key "missing"
W0612 00:07:52.142] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
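The go-template output format fails the same way for an absent key, just with Go's template error text. A sketch:
  kubectl get pod valid-pod -o go-template='{{.metadata.name}}'   # prints valid-pod
  kubectl get pod valid-pod -o go-template='{{.missing}}'         # map has no entry for key "missing"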
I0612 00:07:53.119] Successful
I0612 00:07:53.120] message:NAME        READY   STATUS    RESTARTS   AGE
I0612 00:07:53.120] valid-pod   0/1     Pending   0          1s
I0612 00:07:53.120] STATUS      REASON          MESSAGE
I0612 00:07:53.120] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0612 00:07:53.120] has:STATUS
I0612 00:07:53.122] Successful
I0612 00:07:53.122] message:NAME        READY   STATUS    RESTARTS   AGE
I0612 00:07:53.122] valid-pod   0/1     Pending   0          1s
I0612 00:07:53.122] STATUS      REASON          MESSAGE
I0612 00:07:53.122] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0612 00:07:53.123] has:valid-pod
I0612 00:07:54.200] Successful
I0612 00:07:54.201] message:pod/valid-pod
I0612 00:07:54.201] has not:STATUS
I0612 00:07:54.202] Successful
I0612 00:07:54.202] message:pod/valid-pod
... skipping 142 lines ...
I0612 00:07:55.293]   terminationGracePeriodSeconds: 30
I0612 00:07:55.293] status:
I0612 00:07:55.293]   phase: Pending
I0612 00:07:55.293]   qosClass: Guaranteed
I0612 00:07:55.294] has:name: valid-pod
I0612 00:07:55.368] Successful
I0612 00:07:55.368] message:Error from server (NotFound): pods "invalid-pod" not found
I0612 00:07:55.368] has:"invalid-pod" not found
I0612 00:07:55.443] pod "valid-pod" deleted
I0612 00:07:55.538] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:07:55.705] pod/redis-master created
I0612 00:07:55.710] pod/valid-pod created
I0612 00:07:55.801] Successful
... skipping 286 lines ...
I0612 00:08:01.511] +++ command: run_kubectl_exec_pod_tests
I0612 00:08:01.521] +++ [0612 00:08:01] Creating namespace namespace-1560298081-11306
I0612 00:08:01.588] namespace/namespace-1560298081-11306 created
I0612 00:08:01.655] Context "test" modified.
I0612 00:08:01.661] +++ [0612 00:08:01] Testing kubectl exec POD COMMAND
I0612 00:08:01.740] Successful
I0612 00:08:01.740] message:Error from server (NotFound): pods "abc" not found
I0612 00:08:01.740] has:pods "abc" not found
W0612 00:08:01.841] E0612 00:08:01.242079   51422 garbagecollector.go:217] failed to sync resource monitors (attempt 1): couldn't look up resource "company.com/v1, Resource=foos": no matches for company.com/v1, Resource=foos
W0612 00:08:01.913] W0612 00:08:01.912468   48067 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0612 00:08:01.914] E0612 00:08:01.913851   51422 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:02.014] pod/test-pod created
I0612 00:08:02.037] Successful
I0612 00:08:02.037] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0612 00:08:02.038] has not:pods "test-pod" not found
I0612 00:08:02.039] Successful
I0612 00:08:02.039] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0612 00:08:02.039] has not:pod or type/name must be specified
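Both assertions confirm that kubectl exec reached the pod object but could not attach: the test pod is never scheduled, so the apiserver answers BadRequest rather than NotFound. A sketch:
  kubectl exec test-pod -- date   # BadRequest: pod test-pod does not have a host assigned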
I0612 00:08:02.111] pod "test-pod" deleted
I0612 00:08:02.127] +++ exit code: 0
I0612 00:08:02.156] Recording: run_kubectl_exec_resource_name_tests
I0612 00:08:02.156] Running command: run_kubectl_exec_resource_name_tests
I0612 00:08:02.173] 
... skipping 2 lines ...
I0612 00:08:02.178] +++ command: run_kubectl_exec_resource_name_tests
I0612 00:08:02.189] +++ [0612 00:08:02] Creating namespace namespace-1560298082-14952
I0612 00:08:02.254] namespace/namespace-1560298082-14952 created
I0612 00:08:02.320] Context "test" modified.
I0612 00:08:02.327] +++ [0612 00:08:02] Testing kubectl exec TYPE/NAME COMMAND
I0612 00:08:02.416] Successful
I0612 00:08:02.417] message:error: the server doesn't have a resource type "foo"
I0612 00:08:02.417] has:error:
I0612 00:08:02.500] Successful
I0612 00:08:02.500] message:Error from server (NotFound): deployments.extensions "bar" not found
I0612 00:08:02.500] has:"bar" not found
I0612 00:08:02.662] pod/test-pod created
W0612 00:08:02.763] I0612 00:08:02.740145   51422 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0612 00:08:02.841] I0612 00:08:02.840499   51422 controller_utils.go:1036] Caches are synced for garbage collector controller
W0612 00:08:02.872] I0612 00:08:02.872008   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298082-14952", Name:"frontend", UID:"7c17ef40-1299-42b4-b8c5-a6fd3d27a42a", APIVersion:"apps/v1", ResourceVersion:"806", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r47bw
W0612 00:08:02.877] I0612 00:08:02.876379   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298082-14952", Name:"frontend", UID:"7c17ef40-1299-42b4-b8c5-a6fd3d27a42a", APIVersion:"apps/v1", ResourceVersion:"806", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hts4w
W0612 00:08:02.878] I0612 00:08:02.878397   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298082-14952", Name:"frontend", UID:"7c17ef40-1299-42b4-b8c5-a6fd3d27a42a", APIVersion:"apps/v1", ResourceVersion:"806", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xt5n4
W0612 00:08:02.915] E0612 00:08:02.915113   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:03.016] replicaset.apps/frontend created
I0612 00:08:03.054] configmap/test-set-env-config created
I0612 00:08:03.143] Successful
I0612 00:08:03.144] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0612 00:08:03.144] has:not implemented
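Here exec is given TYPE/NAME instead of a pod name; kubectl resolves the target pod through the resource's selector, which is why a ConfigMap (no pod selector) fails with "not implemented". A sketch using names from this test run:
  kubectl exec rs/frontend -- date                       # picks a pod via the ReplicaSet selector
  kubectl exec configmap/test-set-env-config -- date     # error: no selector to resolve a pod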
I0612 00:08:03.227] Successful
I0612 00:08:03.227] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0612 00:08:03.228] has not:not found
I0612 00:08:03.229] Successful
I0612 00:08:03.229] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0612 00:08:03.229] has not:pod or type/name must be specified
I0612 00:08:03.324] Successful
I0612 00:08:03.324] message:Error from server (BadRequest): pod frontend-hts4w does not have a host assigned
I0612 00:08:03.324] has not:not found
I0612 00:08:03.325] Successful
I0612 00:08:03.326] message:Error from server (BadRequest): pod frontend-hts4w does not have a host assigned
I0612 00:08:03.326] has not:pod or type/name must be specified
I0612 00:08:03.398] pod "test-pod" deleted
I0612 00:08:03.476] replicaset.extensions "frontend" deleted
I0612 00:08:03.555] configmap "test-set-env-config" deleted
I0612 00:08:03.573] +++ exit code: 0
I0612 00:08:03.607] Recording: run_create_secret_tests
I0612 00:08:03.607] Running command: run_create_secret_tests
I0612 00:08:03.627] 
I0612 00:08:03.629] +++ Running case: test-cmd.run_create_secret_tests 
I0612 00:08:03.632] +++ working dir: /go/src/k8s.io/kubernetes
I0612 00:08:03.633] +++ command: run_create_secret_tests
I0612 00:08:03.723] Successful
I0612 00:08:03.723] message:Error from server (NotFound): secrets "mysecret" not found
I0612 00:08:03.723] has:secrets "mysecret" not found
I0612 00:08:03.869] Successful
I0612 00:08:03.869] message:Error from server (NotFound): secrets "mysecret" not found
I0612 00:08:03.869] has:secrets "mysecret" not found
I0612 00:08:03.871] Successful
I0612 00:08:03.871] message:user-specified
I0612 00:08:03.871] has:user-specified
I0612 00:08:03.937] Successful
I0612 00:08:04.006] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"72c957dd-143e-4461-8a7b-c8d9353b5ac7","resourceVersion":"841","creationTimestamp":"2019-06-12T00:08:04Z"}}
... skipping 156 lines ...
I0612 00:08:06.557] }
I0612 00:08:06.632] request-timeout.sh:34: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0612 00:08:06.712] Successful
I0612 00:08:06.712] message:NAME        READY   STATUS    RESTARTS   AGE
I0612 00:08:06.712] valid-pod   0/1     Pending   0          0s
I0612 00:08:06.712] has:valid-pod
W0612 00:08:06.813] E0612 00:08:03.916441   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:06.813] I0612 00:08:04.648601   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298082-14952", Name:"test-the-deployment", UID:"a9b0bd23-2029-45c4-ae11-d2ba59d971a1", APIVersion:"apps/v1", ResourceVersion:"849", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-the-deployment-5f6d7c99fd to 3
W0612 00:08:06.814] I0612 00:08:04.653030   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298082-14952", Name:"test-the-deployment-5f6d7c99fd", UID:"60604a0f-a217-45dd-be2f-91375cb408aa", APIVersion:"apps/v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-5f6d7c99fd-vksg4
W0612 00:08:06.814] I0612 00:08:04.658989   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298082-14952", Name:"test-the-deployment-5f6d7c99fd", UID:"60604a0f-a217-45dd-be2f-91375cb408aa", APIVersion:"apps/v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-5f6d7c99fd-6vv79
W0612 00:08:06.815] I0612 00:08:04.661358   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298082-14952", Name:"test-the-deployment-5f6d7c99fd", UID:"60604a0f-a217-45dd-be2f-91375cb408aa", APIVersion:"apps/v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-5f6d7c99fd-48b6d
W0612 00:08:06.815] E0612 00:08:04.917745   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:06.815] E0612 00:08:05.919056   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:06.921] E0612 00:08:06.920374   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:07.788] Successful
I0612 00:08:07.788] message:NAME        READY   STATUS    RESTARTS   AGE
I0612 00:08:07.789] valid-pod   0/1     Pending   0          0s
I0612 00:08:07.789] STATUS      REASON          MESSAGE
I0612 00:08:07.789] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0612 00:08:07.789] has:Timeout exceeded while reading body
I0612 00:08:07.872] Successful
I0612 00:08:07.873] message:NAME        READY   STATUS    RESTARTS   AGE
I0612 00:08:07.873] valid-pod   0/1     Pending   0          1s
I0612 00:08:07.873] has:valid-pod
I0612 00:08:07.949] Successful
I0612 00:08:07.950] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0612 00:08:07.950] has:Invalid timeout value
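Per the error text, --request-timeout takes bare seconds or an integer with a time unit; anything else is rejected client-side. A sketch:
  kubectl get pods --request-timeout=10    # valid: bare seconds
  kubectl get pods --request-timeout=2m    # valid: integer plus unit
  kubectl get pods --request-timeout=10x   # error: Invalid timeout value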
I0612 00:08:08.028] pod "valid-pod" deleted
I0612 00:08:08.047] +++ exit code: 0
I0612 00:08:08.078] Recording: run_crd_tests
I0612 00:08:08.078] Running command: run_crd_tests
I0612 00:08:08.095] 
I0612 00:08:08.097] +++ Running case: test-cmd.run_crd_tests 
I0612 00:08:08.099] +++ working dir: /go/src/k8s.io/kubernetes
I0612 00:08:08.102] +++ command: run_crd_tests
I0612 00:08:08.114] +++ [0612 00:08:08] Creating namespace namespace-1560298088-20915
I0612 00:08:08.188] namespace/namespace-1560298088-20915 created
I0612 00:08:08.255] Context "test" modified.
I0612 00:08:08.262] +++ [0612 00:08:08] Testing kubectl crd
W0612 00:08:08.363] E0612 00:08:07.921820   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:08.485] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I0612 00:08:08.583] crd.sh:47: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0612 00:08:08.760] customresourcedefinition.apiextensions.k8s.io/bars.company.com created
I0612 00:08:08.860] crd.sh:69: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:
I0612 00:08:09.043] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0612 00:08:09.150] crd.sh:96: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\" \"resources.mygroup.example.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:
... skipping 224 lines ...
I0612 00:08:12.599] foo.company.com/test patched
I0612 00:08:12.684] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0612 00:08:12.761] foo.company.com/test patched
I0612 00:08:12.847] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0612 00:08:12.925] foo.company.com/test patched
I0612 00:08:13.015] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0612 00:08:13.161] +++ [0612 00:08:13] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
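Strategic merge patches need the Go struct schema for the target kind, which custom resources do not have, so a local patch of a CR must fall back to a JSON merge patch. A sketch (the foo.yaml file name is hypothetical):
  kubectl patch --local -f foo.yaml -p '{"patched":"value1"}'                # error: strategic merge unsupported for CRs
  kubectl patch --local -f foo.yaml -p '{"patched":"value1"}' --type merge   # accepted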
I0612 00:08:13.221] {
I0612 00:08:13.221]     "apiVersion": "company.com/v1",
I0612 00:08:13.221]     "kind": "Foo",
I0612 00:08:13.222]     "metadata": {
I0612 00:08:13.222]         "annotations": {
I0612 00:08:13.222]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 319 lines ...
I0612 00:08:39.678] crd.sh:456: Successful get bars {{len .items}}: 1
I0612 00:08:39.754] namespace "non-native-resources" deleted
I0612 00:08:44.949] crd.sh:459: Successful get bars {{len .items}}: 0
I0612 00:08:45.104] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0612 00:08:45.197] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0612 00:08:45.291] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
W0612 00:08:45.392] Error from server (NotFound): namespaces "non-native-resources" not found
I0612 00:08:45.493] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0612 00:08:45.493] +++ exit code: 0
I0612 00:08:45.493] Recording: run_cmd_with_img_tests
I0612 00:08:45.493] Running command: run_cmd_with_img_tests
I0612 00:08:45.493] 
I0612 00:08:45.493] +++ Running case: test-cmd.run_cmd_with_img_tests 
... skipping 5 lines ...
I0612 00:08:45.640] +++ [0612 00:08:45] Testing cmd with image
I0612 00:08:45.732] Successful
I0612 00:08:45.732] message:deployment.apps/test1 created
I0612 00:08:45.732] has:deployment.apps/test1 created
I0612 00:08:45.809] deployment.extensions "test1" deleted
I0612 00:08:45.889] Successful
I0612 00:08:45.889] message:error: Invalid image name "InvalidImageName": invalid reference format
I0612 00:08:45.890] has:error: Invalid image name "InvalidImageName": invalid reference format
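The rejection is image-reference validation: repository names must be lowercase, so the mixed-case "InvalidImageName" never reaches the server. A sketch:
  kubectl run test1 --image=k8s.gcr.io/nginx:test-cmd   # accepted
  kubectl run test1 --image=InvalidImageName            # error: invalid reference format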
I0612 00:08:45.901] +++ exit code: 0
I0612 00:08:45.954] +++ [0612 00:08:45] Testing recursive resources
I0612 00:08:45.959] +++ [0612 00:08:45] Creating namespace namespace-1560298125-19389
I0612 00:08:46.033] namespace/namespace-1560298125-19389 created
I0612 00:08:46.102] Context "test" modified.
I0612 00:08:46.191] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:46.471] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:46.473] Successful
I0612 00:08:46.473] message:pod/busybox0 created
I0612 00:08:46.473] pod/busybox1 created
I0612 00:08:46.474] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0612 00:08:46.474] has:error validating data: kind not set
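The recursive tests feed kubectl a directory in which one manifest (busybox-broken.yaml) misspells "kind" as "ind"; the valid pods are still created while the broken one fails validation, and --validate=false would only trade that for the decode error seen later in this run. A sketch:
  kubectl create -f hack/testdata/recursive/pod --recursive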
I0612 00:08:46.557] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:46.718] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0612 00:08:46.720] Successful
I0612 00:08:46.721] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:46.721] has:Object 'Kind' is missing
I0612 00:08:46.806] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:47.125] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0612 00:08:47.127] Successful
I0612 00:08:47.128] message:pod/busybox0 replaced
I0612 00:08:47.128] pod/busybox1 replaced
I0612 00:08:47.128] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0612 00:08:47.128] has:error validating data: kind not set
I0612 00:08:47.210] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:47.302] Successful
I0612 00:08:47.302] message:Name:         busybox0
I0612 00:08:47.303] Namespace:    namespace-1560298125-19389
I0612 00:08:47.303] Priority:     0
I0612 00:08:47.303] Node:         <none>
... skipping 153 lines ...
I0612 00:08:47.321] has:Object 'Kind' is missing
I0612 00:08:47.397] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:47.576] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0612 00:08:47.578] Successful
I0612 00:08:47.578] message:pod/busybox0 annotated
I0612 00:08:47.578] pod/busybox1 annotated
I0612 00:08:47.579] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:47.579] has:Object 'Kind' is missing
I0612 00:08:47.671] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:47.969] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0612 00:08:47.972] Successful
I0612 00:08:47.972] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0612 00:08:47.973] pod/busybox0 configured
I0612 00:08:47.973] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0612 00:08:47.973] pod/busybox1 configured
I0612 00:08:47.973] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0612 00:08:47.973] has:error validating data: kind not set
I0612 00:08:48.057] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:48.229] deployment.apps/nginx created
I0612 00:08:48.330] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0612 00:08:48.429] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:08:48.594] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0612 00:08:48.597] Successful
... skipping 41 lines ...
I0612 00:08:48.602] has:apps/v1
I0612 00:08:48.676] deployment.extensions "nginx" deleted
W0612 00:08:48.777] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0612 00:08:48.777] I0612 00:08:45.722252   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298125-18049", Name:"test1", UID:"55ec34aa-e7cb-4631-a0ec-f14ebd9c28dd", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-67487cf556 to 1
W0612 00:08:48.777] I0612 00:08:45.727454   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-18049", Name:"test1-67487cf556", UID:"2bd35ab3-b867-42b3-bea8-53dfa051a391", APIVersion:"apps/v1", ResourceVersion:"997", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-67487cf556-nj8z5
W0612 00:08:48.777] W0612 00:08:46.118404   48067 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0612 00:08:48.778] E0612 00:08:46.119847   51422 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.778] W0612 00:08:46.208587   48067 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0612 00:08:48.778] E0612 00:08:46.209994   51422 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.778] W0612 00:08:46.302607   48067 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0612 00:08:48.778] E0612 00:08:46.303975   51422 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.778] W0612 00:08:46.405782   48067 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0612 00:08:48.779] E0612 00:08:46.407338   51422 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.779] E0612 00:08:47.121235   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.779] E0612 00:08:47.211376   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.779] E0612 00:08:47.305420   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.779] E0612 00:08:47.408818   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.779] E0612 00:08:48.122668   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.780] E0612 00:08:48.212625   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.780] I0612 00:08:48.235583   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298125-19389", Name:"nginx", UID:"db638b34-81e3-48ff-bfa4-bbf260dc8f31", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-74bc9cdd45 to 3
W0612 00:08:48.780] I0612 00:08:48.240608   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx-74bc9cdd45", UID:"bc908343-66b5-4781-8786-e10b9f252d6f", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-xxhlm
W0612 00:08:48.780] I0612 00:08:48.244555   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx-74bc9cdd45", UID:"bc908343-66b5-4781-8786-e10b9f252d6f", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-hsl4v
W0612 00:08:48.781] I0612 00:08:48.249474   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx-74bc9cdd45", UID:"bc908343-66b5-4781-8786-e10b9f252d6f", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-5zrdp
W0612 00:08:48.781] E0612 00:08:48.306605   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.781] E0612 00:08:48.409962   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:48.781] kubectl convert is DEPRECATED and will be removed in a future version.
W0612 00:08:48.781] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0612 00:08:48.882] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:48.949] (Bgeneric-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:48.951] Successful
I0612 00:08:48.951] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0612 00:08:48.951] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0612 00:08:48.951] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:48.951] has:Object 'Kind' is missing
I0612 00:08:49.039] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:49.124] Successful
I0612 00:08:49.125] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:49.125] has:busybox0:busybox1:
I0612 00:08:49.127] Successful
I0612 00:08:49.127] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:49.127] has:Object 'Kind' is missing
I0612 00:08:49.222] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:49.335] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0612 00:08:49.436] E0612 00:08:49.125115   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:49.436] E0612 00:08:49.213766   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:49.436] E0612 00:08:49.314449   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:49.437] E0612 00:08:49.411836   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:49.537] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0612 00:08:49.537] Successful
I0612 00:08:49.537] message:pod/busybox0 labeled
I0612 00:08:49.537] pod/busybox1 labeled
I0612 00:08:49.538] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:49.538] has:Object 'Kind' is missing
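(Annotation: the myvalue:myvalue assertions suggest a recursive label applied across the fixture directory, roughly; exact flags are an assumption:

  kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue
)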
I0612 00:08:49.538] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:49.628] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:49.730] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0612 00:08:49.730] Successful
I0612 00:08:49.730] message:pod/busybox0 patched
I0612 00:08:49.730] pod/busybox1 patched
I0612 00:08:49.730] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:49.731] has:Object 'Kind' is missing
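(Annotation: the prom/busybox image assertions point at a recursive strategic-merge patch over the same directory; a sketch, with the patch body assumed:

  kubectl patch -f hack/testdata/recursive/pod --recursive \
    -p '{"spec":{"containers":[{"name":"busybox","image":"prom/busybox"}]}}'
)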
I0612 00:08:49.822] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:50.007] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:50.010] Successful
I0612 00:08:50.011] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0612 00:08:50.011] pod "busybox0" force deleted
I0612 00:08:50.011] pod "busybox1" force deleted
I0612 00:08:50.011] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0612 00:08:50.012] has:Object 'Kind' is missing
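(Annotation: the immediate-deletion warning above is what kubectl prints for a forced, zero-grace-period delete; presumably run recursively here:

  kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0
)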
I0612 00:08:50.100] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:50.264] replicationcontroller/busybox0 created
I0612 00:08:50.274] replicationcontroller/busybox1 created
I0612 00:08:50.372] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:50.462] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:50.551] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0612 00:08:50.636] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0612 00:08:50.805] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0612 00:08:50.893] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0612 00:08:50.896] Successful
I0612 00:08:50.897] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0612 00:08:50.897] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0612 00:08:50.898] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:50.898] has:Object 'Kind' is missing
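(Annotation: the "1 2 80" checks map to an HPA's minReplicas, maxReplicas, and targetCPUUtilizationPercentage, consistent with a recursive autoscale such as:

  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80
)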
I0612 00:08:50.975] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0612 00:08:51.055] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0612 00:08:51.148] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:51.229] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0612 00:08:51.314] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0612 00:08:51.497] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0612 00:08:51.581] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0612 00:08:51.583] Successful
I0612 00:08:51.583] message:service/busybox0 exposed
I0612 00:08:51.583] service/busybox1 exposed
I0612 00:08:51.583] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:51.583] has:Object 'Kind' is missing
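(Annotation: the "<no value> 80" checks, an unnamed service port on port 80, are consistent with a recursive expose along the lines of:

  kubectl expose -f hack/testdata/recursive/rc --recursive --port=80
)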
I0612 00:08:51.667] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:51.753] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0612 00:08:51.844] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0612 00:08:52.039] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0612 00:08:52.124] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0612 00:08:52.126] Successful
I0612 00:08:52.126] message:replicationcontroller/busybox0 scaled
I0612 00:08:52.126] replicationcontroller/busybox1 scaled
I0612 00:08:52.127] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:52.127] has:Object 'Kind' is missing
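(Annotation: the replica counts moving from 1 to 2 suggest a recursive scale; a sketch, where the --current-replicas guard is an assumption:

  kubectl scale -f hack/testdata/recursive/rc --recursive --current-replicas=1 --replicas=2
)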
I0612 00:08:52.211] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:52.402] generic-resources.sh:381: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:52.404] Successful
I0612 00:08:52.404] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0612 00:08:52.405] replicationcontroller "busybox0" force deleted
I0612 00:08:52.405] replicationcontroller "busybox1" force deleted
I0612 00:08:52.405] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:52.406] has:Object 'Kind' is missing
I0612 00:08:52.491] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:52.656] deployment.apps/nginx1-deployment created
I0612 00:08:52.662] deployment.apps/nginx0-deployment created
W0612 00:08:52.763] E0612 00:08:50.126659   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.764] E0612 00:08:50.215491   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.764] I0612 00:08:50.267719   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298125-19389", Name:"busybox0", UID:"14201d2f-0006-493d-acfe-379f6a44ab22", APIVersion:"v1", ResourceVersion:"1053", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7mt7j
W0612 00:08:52.765] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0612 00:08:52.765] I0612 00:08:50.276093   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298125-19389", Name:"busybox1", UID:"23221185-0918-47a9-9f08-60f91375fc7a", APIVersion:"v1", ResourceVersion:"1055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-rpxrc
W0612 00:08:52.765] E0612 00:08:50.316259   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.765] E0612 00:08:50.413446   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.765] E0612 00:08:51.127970   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.766] E0612 00:08:51.216774   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.766] E0612 00:08:51.317384   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.766] E0612 00:08:51.414476   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.766] I0612 00:08:51.938376   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298125-19389", Name:"busybox0", UID:"14201d2f-0006-493d-acfe-379f6a44ab22", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-96246
W0612 00:08:52.767] I0612 00:08:51.949705   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298125-19389", Name:"busybox1", UID:"23221185-0918-47a9-9f08-60f91375fc7a", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wvhxt
W0612 00:08:52.767] E0612 00:08:52.129176   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.767] E0612 00:08:52.218430   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.767] E0612 00:08:52.318592   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.767] E0612 00:08:52.415922   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:52.768] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0612 00:08:52.768] I0612 00:08:52.665008   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298125-19389", Name:"nginx1-deployment", UID:"dcdd16ee-fa7e-4214-9113-f63e4d689c6a", APIVersion:"apps/v1", ResourceVersion:"1095", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-746b5db49 to 2
W0612 00:08:52.768] I0612 00:08:52.667255   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx1-deployment-746b5db49", UID:"c6c2e1bb-2135-40ab-9f87-0e61ecdf24f2", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-746b5db49-4f8dm
W0612 00:08:52.768] I0612 00:08:52.669899   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298125-19389", Name:"nginx0-deployment", UID:"5abd36ea-cac4-4dcd-bf95-b10175be42aa", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-5679bdc9fb to 2
W0612 00:08:52.769] I0612 00:08:52.673314   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx1-deployment-746b5db49", UID:"c6c2e1bb-2135-40ab-9f87-0e61ecdf24f2", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-746b5db49-46lsx
W0612 00:08:52.769] I0612 00:08:52.677931   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx0-deployment-5679bdc9fb", UID:"fa1c7209-56d5-41ac-ad6e-38360abf8a79", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5679bdc9fb-gw528
W0612 00:08:52.770] I0612 00:08:52.683211   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298125-19389", Name:"nginx0-deployment-5679bdc9fb", UID:"fa1c7209-56d5-41ac-ad6e-38360abf8a79", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5679bdc9fb-k7s62
I0612 00:08:52.870] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0612 00:08:52.872] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0612 00:08:53.074] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0612 00:08:53.076] Successful
I0612 00:08:53.077] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0612 00:08:53.077] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0612 00:08:53.077] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0612 00:08:53.077] has:Object 'Kind' is missing
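(Annotation: "skipped rollback (current template already matches revision 1)" is the rollout-undo no-op message; presumably produced by something like:

  kubectl rollout undo -f hack/testdata/recursive/deployment --recursive
)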
I0612 00:08:53.172] deployment.apps/nginx1-deployment paused
I0612 00:08:53.181] deployment.apps/nginx0-deployment paused
W0612 00:08:53.282] E0612 00:08:53.130866   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:53.282] E0612 00:08:53.219806   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:53.320] E0612 00:08:53.320463   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:53.418] E0612 00:08:53.417737   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:53.519] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0612 00:08:53.519] Successful
I0612 00:08:53.520] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0612 00:08:53.520] has:Object 'Kind' is missing
I0612 00:08:53.521] deployment.apps/nginx1-deployment resumed
I0612 00:08:53.521] deployment.apps/nginx0-deployment resumed
... skipping 7 lines ...
I0612 00:08:53.617] 1         <none>
I0612 00:08:53.617] 
I0612 00:08:53.617] deployment.apps/nginx0-deployment 
I0612 00:08:53.617] REVISION  CHANGE-CAUSE
I0612 00:08:53.617] 1         <none>
I0612 00:08:53.617] 
I0612 00:08:53.618] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0612 00:08:53.618] has:nginx0-deployment
I0612 00:08:53.618] Successful
I0612 00:08:53.619] message:deployment.apps/nginx1-deployment 
I0612 00:08:53.619] REVISION  CHANGE-CAUSE
I0612 00:08:53.619] 1         <none>
I0612 00:08:53.619] 
I0612 00:08:53.619] deployment.apps/nginx0-deployment 
I0612 00:08:53.619] REVISION  CHANGE-CAUSE
I0612 00:08:53.619] 1         <none>
I0612 00:08:53.619] 
I0612 00:08:53.620] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0612 00:08:53.620] has:nginx1-deployment
I0612 00:08:53.621] Successful
I0612 00:08:53.621] message:deployment.apps/nginx1-deployment 
I0612 00:08:53.621] REVISION  CHANGE-CAUSE
I0612 00:08:53.621] 1         <none>
I0612 00:08:53.621] 
I0612 00:08:53.621] deployment.apps/nginx0-deployment 
I0612 00:08:53.622] REVISION  CHANGE-CAUSE
I0612 00:08:53.622] 1         <none>
I0612 00:08:53.622] 
I0612 00:08:53.622] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0612 00:08:53.623] has:Object 'Kind' is missing
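(Annotation: the paused:true assertion, the "resumed" lines, and the REVISION/CHANGE-CAUSE tables above correspond to the rollout pause, resume, and history subcommands, roughly:

  kubectl rollout pause   -f hack/testdata/recursive/deployment --recursive
  kubectl rollout resume  -f hack/testdata/recursive/deployment --recursive
  kubectl rollout history -f hack/testdata/recursive/deployment --recursive
)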
I0612 00:08:53.701] deployment.apps "nginx1-deployment" force deleted
I0612 00:08:53.707] deployment.apps "nginx0-deployment" force deleted
W0612 00:08:53.808] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0612 00:08:53.808] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0612 00:08:54.133] E0612 00:08:54.132424   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:54.222] E0612 00:08:54.221399   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:54.322] E0612 00:08:54.321782   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:54.420] E0612 00:08:54.419409   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:54.802] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:08:54.968] replicationcontroller/busybox0 created
I0612 00:08:54.976] replicationcontroller/busybox1 created
I0612 00:08:55.078] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0612 00:08:55.166] Successful
I0612 00:08:55.166] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0612 00:08:55.168] message:no rollbacker has been implemented for "ReplicationController"
I0612 00:08:55.168] no rollbacker has been implemented for "ReplicationController"
I0612 00:08:55.168] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.169] has:Object 'Kind' is missing
I0612 00:08:55.258] Successful
I0612 00:08:55.259] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.259] error: replicationcontrollers "busybox0" pausing is not supported
I0612 00:08:55.259] error: replicationcontrollers "busybox1" pausing is not supported
I0612 00:08:55.259] has:Object 'Kind' is missing
I0612 00:08:55.260] Successful
I0612 00:08:55.260] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.260] error: replicationcontrollers "busybox0" pausing is not supported
I0612 00:08:55.260] error: replicationcontrollers "busybox1" pausing is not supported
I0612 00:08:55.260] has:replicationcontrollers "busybox0" pausing is not supported
I0612 00:08:55.262] Successful
I0612 00:08:55.262] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.262] error: replicationcontrollers "busybox0" pausing is not supported
I0612 00:08:55.262] error: replicationcontrollers "busybox1" pausing is not supported
I0612 00:08:55.262] has:replicationcontrollers "busybox1" pausing is not supported
I0612 00:08:55.360] Successful
I0612 00:08:55.360] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.361] error: replicationcontrollers "busybox0" resuming is not supported
I0612 00:08:55.361] error: replicationcontrollers "busybox1" resuming is not supported
I0612 00:08:55.361] has:Object 'Kind' is missing
I0612 00:08:55.362] Successful
I0612 00:08:55.362] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.363] error: replicationcontrollers "busybox0" resuming is not supported
I0612 00:08:55.363] error: replicationcontrollers "busybox1" resuming is not supported
I0612 00:08:55.363] has:replicationcontrollers "busybox0" resuming is not supported
I0612 00:08:55.365] Successful
I0612 00:08:55.365] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0612 00:08:55.365] error: replicationcontrollers "busybox0" resuming is not supported
I0612 00:08:55.365] error: replicationcontrollers "busybox1" resuming is not supported
I0612 00:08:55.365] has:replicationcontrollers "busybox0" resuming is not supported
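(Annotation: ReplicationControllers implement no rollback, pause, or resume, so the same recursive rollout commands fail by design when pointed at the rc fixtures, e.g.:

  kubectl rollout pause  -f hack/testdata/recursive/rc --recursive   # pausing is not supported
  kubectl rollout resume -f hack/testdata/recursive/rc --recursive   # resuming is not supported
)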
I0612 00:08:55.443] replicationcontroller "busybox0" force deleted
I0612 00:08:55.448] replicationcontroller "busybox1" force deleted
W0612 00:08:55.548] I0612 00:08:54.973951   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298125-19389", Name:"busybox0", UID:"05d8c5f7-dfbf-44d7-a582-4a4dc87ff75a", APIVersion:"v1", ResourceVersion:"1144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-mmv6x
W0612 00:08:55.549] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0612 00:08:55.549] I0612 00:08:54.980472   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298125-19389", Name:"busybox1", UID:"d9d993f1-1c25-44a4-90d2-90c20c6a8321", APIVersion:"v1", ResourceVersion:"1146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xh8zz
W0612 00:08:55.549] E0612 00:08:55.133870   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:55.549] E0612 00:08:55.222697   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:55.550] E0612 00:08:55.323023   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:55.550] E0612 00:08:55.421028   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:55.550] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0612 00:08:55.550] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0612 00:08:56.136] E0612 00:08:56.135535   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:56.224] E0612 00:08:56.224167   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:56.325] E0612 00:08:56.324543   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:56.423] E0612 00:08:56.422514   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:08:56.523] Recording: run_namespace_tests
I0612 00:08:56.524] Running command: run_namespace_tests
I0612 00:08:56.524] 
I0612 00:08:56.524] +++ Running case: test-cmd.run_namespace_tests 
I0612 00:08:56.524] +++ working dir: /go/src/k8s.io/kubernetes
I0612 00:08:56.525] +++ command: run_namespace_tests
I0612 00:08:56.525] +++ [0612 00:08:56] Testing kubectl(v1:namespaces)
I0612 00:08:56.568] namespace/my-namespace created
I0612 00:08:56.659] core.sh:1312: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0612 00:08:56.732] namespace "my-namespace" deleted
W0612 00:08:57.137] E0612 00:08:57.136929   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:57.226] E0612 00:08:57.225864   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:57.326] E0612 00:08:57.326219   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:57.424] E0612 00:08:57.424022   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:58.139] E0612 00:08:58.138391   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:58.228] E0612 00:08:58.227322   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:58.328] E0612 00:08:58.327633   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:58.426] E0612 00:08:58.425365   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:59.140] E0612 00:08:59.139834   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:59.229] E0612 00:08:59.228789   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:59.329] E0612 00:08:59.329004   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:08:59.427] E0612 00:08:59.426816   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:00.141] E0612 00:09:00.141159   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:00.231] E0612 00:09:00.230363   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:00.330] E0612 00:09:00.330323   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:00.429] E0612 00:09:00.429260   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:01.143] E0612 00:09:01.142692   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:01.232] E0612 00:09:01.231775   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:01.332] E0612 00:09:01.331740   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:01.431] E0612 00:09:01.430851   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:01.538] I0612 00:09:01.538269   51422 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0612 00:09:01.639] I0612 00:09:01.638631   51422 controller_utils.go:1036] Caches are synced for resource quota controller
I0612 00:09:01.830] namespace/my-namespace condition met
I0612 00:09:01.914] Successful
I0612 00:09:01.915] message:Error from server (NotFound): namespaces "my-namespace" not found
I0612 00:09:01.915] has: not found
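(Annotation: "condition met" followed by a NotFound is the shape of a delete-then-wait check; a sketch, with the wait flags assumed:

  kubectl delete namespace my-namespace
  kubectl wait --for=delete namespace/my-namespace --timeout=60s
  kubectl get namespace my-namespace   # Error from server (NotFound)
)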
I0612 00:09:01.988] namespace/my-namespace created
I0612 00:09:02.076] core.sh:1321: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0612 00:09:02.267] Successful
I0612 00:09:02.268] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0612 00:09:02.268] namespace "kube-node-lease" deleted
... skipping 30 lines ...
I0612 00:09:02.273] namespace "namespace-1560298085-23119" deleted
I0612 00:09:02.273] namespace "namespace-1560298086-6629" deleted
I0612 00:09:02.273] namespace "namespace-1560298088-20915" deleted
I0612 00:09:02.273] namespace "namespace-1560298089-30807" deleted
I0612 00:09:02.273] namespace "namespace-1560298125-18049" deleted
I0612 00:09:02.273] namespace "namespace-1560298125-19389" deleted
I0612 00:09:02.273] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0612 00:09:02.273] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0612 00:09:02.274] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0612 00:09:02.274] has:warning: deleting cluster-scoped resources
I0612 00:09:02.274] Successful
I0612 00:09:02.274] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0612 00:09:02.274] namespace "kube-node-lease" deleted
I0612 00:09:02.274] namespace "my-namespace" deleted
I0612 00:09:02.274] namespace "namespace-1560297990-3291" deleted
... skipping 28 lines ...
I0612 00:09:02.279] namespace "namespace-1560298085-23119" deleted
I0612 00:09:02.279] namespace "namespace-1560298086-6629" deleted
I0612 00:09:02.279] namespace "namespace-1560298088-20915" deleted
I0612 00:09:02.279] namespace "namespace-1560298089-30807" deleted
I0612 00:09:02.279] namespace "namespace-1560298125-18049" deleted
I0612 00:09:02.279] namespace "namespace-1560298125-19389" deleted
I0612 00:09:02.279] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0612 00:09:02.280] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0612 00:09:02.280] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0612 00:09:02.280] has:namespace "my-namespace" deleted
I0612 00:09:02.375] core.sh:1333: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0612 00:09:02.449] namespace/other created
I0612 00:09:02.542] core.sh:1337: Successful get namespaces/other {{.metadata.name}}: other
I0612 00:09:02.629] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:02.804] pod/valid-pod created
I0612 00:09:02.902] core.sh:1345: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0612 00:09:02.984] core.sh:1347: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0612 00:09:03.061] Successful
I0612 00:09:03.062] message:error: a resource cannot be retrieved by name across all namespaces
I0612 00:09:03.062] has:a resource cannot be retrieved by name across all namespaces
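(Annotation: kubectl rejects fetching a single named object across every namespace; the error above is what a by-name, all-namespaces lookup produces:

  kubectl get pods valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
)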
I0612 00:09:03.147] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0612 00:09:03.222] pod "valid-pod" force deleted
I0612 00:09:03.313] core.sh:1358: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:03.385] namespace "other" deleted
W0612 00:09:03.486] E0612 00:09:02.144364   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.486] E0612 00:09:02.232992   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.486] E0612 00:09:02.333176   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.486] E0612 00:09:02.432124   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.487] E0612 00:09:03.145727   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.487] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0612 00:09:03.487] E0612 00:09:03.234234   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.487] E0612 00:09:03.334521   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.487] E0612 00:09:03.433826   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:03.748] I0612 00:09:03.747826   51422 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0612 00:09:03.848] I0612 00:09:03.848176   51422 controller_utils.go:1036] Caches are synced for garbage collector controller
W0612 00:09:04.148] E0612 00:09:04.147620   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:04.236] E0612 00:09:04.235703   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:04.336] E0612 00:09:04.335936   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:04.436] E0612 00:09:04.435268   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:05.149] E0612 00:09:05.149027   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:05.237] E0612 00:09:05.237135   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:05.337] E0612 00:09:05.337312   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:05.437] E0612 00:09:05.436727   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:05.715] I0612 00:09:05.714292   51422 horizontal.go:340] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1560298125-19389
W0612 00:09:05.719] I0612 00:09:05.718918   51422 horizontal.go:340] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1560298125-19389
W0612 00:09:06.151] E0612 00:09:06.150535   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:06.239] E0612 00:09:06.238748   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:06.339] E0612 00:09:06.338909   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:06.439] E0612 00:09:06.438499   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:07.152] E0612 00:09:07.152009   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:07.241] E0612 00:09:07.241014   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:07.341] E0612 00:09:07.341215   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:07.442] E0612 00:09:07.442405   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:08.153] E0612 00:09:08.153070   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:08.243] E0612 00:09:08.242617   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:08.343] E0612 00:09:08.342721   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:08.444] E0612 00:09:08.443325   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:08.544] +++ exit code: 0
I0612 00:09:08.544] Recording: run_secrets_test
I0612 00:09:08.545] Running command: run_secrets_test
I0612 00:09:08.558] 
I0612 00:09:08.560] +++ Running case: test-cmd.run_secrets_test 
I0612 00:09:08.562] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I0612 00:09:10.514] (Bsecret "test-secret" deleted
I0612 00:09:10.591] secret/test-secret created
I0612 00:09:10.677] core.sh:761: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0612 00:09:10.758] core.sh:762: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0612 00:09:10.828] secret "test-secret" deleted
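(Annotation: a kubernetes.io/tls secret like the one asserted above is normally created from a certificate/key pair; a sketch with hypothetical paths:

  kubectl create secret tls test-secret --namespace=test-secrets \
    --cert=path/to/tls.crt --key=path/to/tls.key
)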
W0612 00:09:10.928] I0612 00:09:08.803067   69113 loader.go:359] Config loaded from file:  /tmp/tmp.7rAceWMmMn/.kube/config
W0612 00:09:10.929] E0612 00:09:09.154561   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.929] E0612 00:09:09.243770   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.930] E0612 00:09:09.343918   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.930] E0612 00:09:09.444655   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.930] E0612 00:09:10.155937   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.931] E0612 00:09:10.245567   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.931] E0612 00:09:10.345401   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:10.931] E0612 00:09:10.446329   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:11.032] secret/secret-string-data created
I0612 00:09:11.078] core.sh:784: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0612 00:09:11.156] core.sh:785: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0612 00:09:11.236] core.sh:786: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0612 00:09:11.307] secret "secret-string-data" deleted
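(Annotation: the data map[k1:djE= k2:djI=] is just base64 of v1/v2; the API server folds write-only stringData into data, which is why .stringData reads back as <no value>. A minimal reconstruction of such a secret:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
    namespace: test-secrets
  stringData:           # write-only; the server stores it base64-encoded under .data
    k1: v1
    k2: v2
  EOF
)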
I0612 00:09:11.399] core.sh:795: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:11.540] secret "test-secret" deleted
I0612 00:09:11.615] namespace "test-secrets" deleted
W0612 00:09:11.716] E0612 00:09:11.157110   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:11.716] E0612 00:09:11.246835   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:11.716] E0612 00:09:11.347162   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:11.717] E0612 00:09:11.447833   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:12.159] E0612 00:09:12.158779   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:12.249] E0612 00:09:12.248480   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
... skipping 17 lines: identical reflector.go:125 list errors, repeating roughly every 100ms ...
W0612 00:09:16.455] E0612 00:09:16.454918   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:16.717] +++ exit code: 0
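The recurring reflector.go:125 lines above are one retry loop: a dynamic informer inside the controller manager keeps issuing a LIST for a resource the API server no longer serves, and each attempt fails with "the server could not find the requested resource". A minimal sketch of that list path, assuming a reachable kubeconfig at /tmp/kubeconfig and a hypothetical GroupVersionResource (widgets.example.com) that the server does not serve:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; any working config will do.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A resource the server does not serve reproduces the error above.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "widgets"}
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 5*time.Minute)
	informer := factory.ForResource(gvr).Informer()

	stop := make(chan struct{})
	defer close(stop)
	// The reflector behind the informer performs the LIST; failures surface
	// as the reflector.go:125 errors seen in the log, and it retries forever.
	factory.Start(stop)
	time.Sleep(2 * time.Second)
	// HasSynced stays false while every LIST fails.
	fmt.Println("synced:", informer.HasSynced())
}
```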
I0612 00:09:16.747] Recording: run_configmap_tests
I0612 00:09:16.747] Running command: run_configmap_tests
I0612 00:09:16.765] 
I0612 00:09:16.767] +++ Running case: test-cmd.run_configmap_tests 
I0612 00:09:16.770] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0612 00:09:17.860] configmap/test-binary-configmap created
I0612 00:09:17.950] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0612 00:09:18.035] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0612 00:09:18.269] configmap "test-configmap" deleted
I0612 00:09:18.346] configmap "test-binary-configmap" deleted
I0612 00:09:18.431] namespace "test-configmaps" deleted
W0612 00:09:18.532] E0612 00:09:17.166535   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
... skipping 26 lines: identical reflector.go:125 list errors, repeating roughly every 100ms ...
W0612 00:09:23.466] E0612 00:09:23.466085   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:23.567] +++ exit code: 0
I0612 00:09:23.588] Recording: run_client_config_tests
I0612 00:09:23.588] Running command: run_client_config_tests
I0612 00:09:23.608] 
I0612 00:09:23.610] +++ Running case: test-cmd.run_client_config_tests 
I0612 00:09:23.612] +++ working dir: /go/src/k8s.io/kubernetes
I0612 00:09:23.614] +++ command: run_client_config_tests
I0612 00:09:23.627] +++ [0612 00:09:23] Creating namespace namespace-1560298163-14247
I0612 00:09:23.700] namespace/namespace-1560298163-14247 created
I0612 00:09:23.767] Context "test" modified.
I0612 00:09:23.775] +++ [0612 00:09:23] Testing client config
I0612 00:09:23.841] Successful
I0612 00:09:23.842] message:error: stat missing: no such file or directory
I0612 00:09:23.842] has:missing: no such file or directory
I0612 00:09:23.908] Successful
I0612 00:09:23.909] message:error: stat missing: no such file or directory
I0612 00:09:23.909] has:missing: no such file or directory
I0612 00:09:23.972] Successful
I0612 00:09:23.973] message:error: stat missing: no such file or directory
I0612 00:09:23.973] has:missing: no such file or directory
I0612 00:09:24.040] Successful
I0612 00:09:24.041] message:Error in configuration: context was not found for specified context: missing-context
I0612 00:09:24.041] has:context was not found for specified context: missing-context
I0612 00:09:24.107] Successful
I0612 00:09:24.107] message:error: no server found for cluster "missing-cluster"
I0612 00:09:24.108] has:no server found for cluster "missing-cluster"
I0612 00:09:24.174] Successful
I0612 00:09:24.174] message:error: auth info "missing-user" does not exist
I0612 00:09:24.174] has:auth info "missing-user" does not exist
W0612 00:09:24.275] E0612 00:09:24.177066   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:24.275] E0612 00:09:24.267165   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:24.370] E0612 00:09:24.369595   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:24.468] E0612 00:09:24.467498   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:24.568] Successful
I0612 00:09:24.569] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0612 00:09:24.569] has:Error loading config file
I0612 00:09:24.569] Successful
I0612 00:09:24.569] message:error: stat missing-config: no such file or directory
I0612 00:09:24.569] has:no such file or directory
I0612 00:09:24.569] +++ exit code: 0
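The `no kind "Config" is registered for version "v-1"` failure above comes from kubeconfig decoding: clientcmd deserializes the file against its own API scheme, so an unrecognized apiVersion fails before any server contact. A minimal sketch, assuming only client-go; the file contents are a hypothetical reconstruction of what the test writes to /tmp/newconfig.yaml:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// A kubeconfig declaring an apiVersion that was never registered.
	bad := []byte("apiVersion: v-1\nkind: Config\n")
	if err := ioutil.WriteFile("/tmp/newconfig.yaml", bad, 0644); err != nil {
		panic(err)
	}
	// LoadFromFile decodes against the clientcmd scheme; the unknown
	// version surfaces roughly as the error quoted in the log.
	_, err := clientcmd.LoadFromFile("/tmp/newconfig.yaml")
	fmt.Println(err)
}
```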
I0612 00:09:24.569] Recording: run_service_accounts_tests
I0612 00:09:24.569] Running command: run_service_accounts_tests
I0612 00:09:24.569] 
I0612 00:09:24.569] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0612 00:09:24.759] (Bnamespace/test-service-accounts created
I0612 00:09:24.848] core.sh:820: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0612 00:09:24.915] (Bserviceaccount/test-service-account created
I0612 00:09:25.001] core.sh:826: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0612 00:09:25.076] (Bserviceaccount "test-service-account" deleted
I0612 00:09:25.156] namespace "test-service-accounts" deleted
W0612 00:09:25.257] E0612 00:09:25.178398   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
... skipping 20 lines: identical reflector.go:125 list errors, repeating roughly every 100ms ...
W0612 00:09:30.276] E0612 00:09:30.276076   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:30.377] +++ exit code: 0
I0612 00:09:30.377] Recording: run_job_tests
I0612 00:09:30.378] Running command: run_job_tests
I0612 00:09:30.378] 
I0612 00:09:30.378] +++ Running case: test-cmd.run_job_tests 
I0612 00:09:30.378] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0612 00:09:31.039] Labels:                        run=pi
I0612 00:09:31.039] Annotations:                   <none>
I0612 00:09:31.039] Schedule:                      59 23 31 2 *
I0612 00:09:31.039] Concurrency Policy:            Allow
I0612 00:09:31.039] Suspend:                       False
I0612 00:09:31.039] Successful Job History Limit:  3
I0612 00:09:31.039] Failed Job History Limit:      1
I0612 00:09:31.039] Starting Deadline Seconds:     <unset>
I0612 00:09:31.039] Selector:                      <unset>
I0612 00:09:31.039] Parallelism:                   <unset>
I0612 00:09:31.039] Completions:                   <unset>
I0612 00:09:31.040] Pod Template:
I0612 00:09:31.040]   Labels:  run=pi
... skipping 32 lines ...
I0612 00:09:31.536]                 run=pi
I0612 00:09:31.536] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0612 00:09:31.536] Controlled By:  CronJob/pi
I0612 00:09:31.536] Parallelism:    1
I0612 00:09:31.536] Completions:    1
I0612 00:09:31.536] Start Time:     Wed, 12 Jun 2019 00:09:31 +0000
I0612 00:09:31.536] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0612 00:09:31.536] Pod Template:
I0612 00:09:31.536]   Labels:  controller-uid=0accf46b-a1cb-4c5f-933a-a73ffa5fff3a
I0612 00:09:31.536]            job-name=test-job
I0612 00:09:31.537]            run=pi
I0612 00:09:31.537]   Containers:
I0612 00:09:31.537]    pi:
... skipping 15 lines ...
I0612 00:09:31.538]   Type    Reason            Age   From            Message
I0612 00:09:31.538]   ----    ------            ----  ----            -------
I0612 00:09:31.538]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-lnkjh
I0612 00:09:31.614] job.batch "test-job" deleted
I0612 00:09:31.693] cronjob.batch "pi" deleted
I0612 00:09:31.770] namespace "test-jobs" deleted
W0612 00:09:31.871] E0612 00:09:30.380401   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:31.871] E0612 00:09:30.476148   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:31.872] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0612 00:09:31.872] E0612 00:09:31.186893   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:31.872] E0612 00:09:31.277545   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:31.872] I0612 00:09:31.289847   51422 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"0accf46b-a1cb-4c5f-933a-a73ffa5fff3a", APIVersion:"batch/v1", ResourceVersion:"1437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-lnkjh
W0612 00:09:31.873] E0612 00:09:31.382032   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:31.873] E0612 00:09:31.477344   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:32.189] E0612 00:09:32.188533   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
... skipping 18 lines: identical reflector.go:125 list errors, repeating roughly every 100ms ...
W0612 00:09:36.485] E0612 00:09:36.484540   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:36.888] +++ exit code: 0
I0612 00:09:36.918] Recording: run_create_job_tests
I0612 00:09:36.918] Running command: run_create_job_tests
I0612 00:09:36.936] 
I0612 00:09:36.938] +++ Running case: test-cmd.run_create_job_tests 
I0612 00:09:36.941] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 6 lines ...
I0612 00:09:37.349] job.batch "test-job" deleted
I0612 00:09:37.435] job.batch/test-job-pi created
I0612 00:09:37.530] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0612 00:09:37.608] job.batch "test-job-pi" deleted
I0612 00:09:37.697] cronjob.batch/test-pi created
W0612 00:09:37.798] I0612 00:09:37.170923   51422 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1560298176-8228", Name:"test-job", UID:"7cd7b681-370d-4169-9388-45fda9ad6314", APIVersion:"batch/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-jnp58
W0612 00:09:37.798] E0612 00:09:37.195811   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:37.798] E0612 00:09:37.286278   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:37.799] E0612 00:09:37.392297   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:37.799] I0612 00:09:37.431452   51422 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1560298176-8228", Name:"test-job-pi", UID:"3abbebf5-dce1-4d43-b3a8-f0ec9103078d", APIVersion:"batch/v1", ResourceVersion:"1462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-vbfx8
W0612 00:09:37.799] E0612 00:09:37.485871   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:37.800] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0612 00:09:37.800] I0612 00:09:37.799546   51422 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1560298176-8228", Name:"my-pi", UID:"cb2092e6-e5f3-40b8-8d50-9cfd1ae9b95a", APIVersion:"batch/v1", ResourceVersion:"1470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-ld9sf
I0612 00:09:37.900] job.batch/my-pi created
I0612 00:09:37.901] Successful
I0612 00:09:37.901] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0612 00:09:37.901] has:perl -Mbignum=bpi -wle print bpi(10)
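The event.go:258 lines in the warnings channel are controllers recording Events through a record.EventRecorder. A minimal sketch of that wiring, assuming a kubeconfig at /tmp/kubeconfig (hypothetical path) and using a stand-in Pod as the event subject rather than the controller's own Job object:

```go
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/record"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Broadcaster fans events out to sinks; here, the API server itself.
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: clientset.CoreV1().Events(""),
	})
	recorder := broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "job-controller"})

	// Any API object works as the event's subject; a Pod is the simplest.
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "test-job-lnkjh", Namespace: "test-jobs"}}
	recorder.Eventf(pod, corev1.EventTypeNormal, "SuccessfulCreate", "Created pod: %s", pod.Name)

	// Give the broadcaster a moment to flush the event before exiting.
	time.Sleep(time.Second)
}
```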
... skipping 12 lines ...
I0612 00:09:38.299] +++ [0612 00:09:38] Testing pod templates
I0612 00:09:38.393] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:38.569] podtemplate/nginx created
I0612 00:09:38.671] core.sh:1423: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0612 00:09:38.747] NAME    CONTAINERS   IMAGES   POD LABELS
I0612 00:09:38.747] nginx   nginx        nginx    name=nginx
W0612 00:09:38.847] E0612 00:09:38.197026   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:38.848] E0612 00:09:38.287112   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:38.848] E0612 00:09:38.393522   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:38.848] E0612 00:09:38.487308   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:38.848] I0612 00:09:38.565614   48067 controller.go:606] quota admission added evaluator for: podtemplates
I0612 00:09:38.949] core.sh:1431: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0612 00:09:39.013] podtemplate "nginx" deleted
I0612 00:09:39.107] core.sh:1435: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:39.120] +++ exit code: 0
I0612 00:09:39.155] Recording: run_service_tests
... skipping 66 lines ...
I0612 00:09:40.056] Port:              <unset>  6379/TCP
I0612 00:09:40.057] TargetPort:        6379/TCP
I0612 00:09:40.057] Endpoints:         <none>
I0612 00:09:40.057] Session Affinity:  None
I0612 00:09:40.057] Events:            <none>
I0612 00:09:40.057]
W0612 00:09:40.157] E0612 00:09:39.198397   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:40.158] E0612 00:09:39.288458   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:40.158] E0612 00:09:39.394857   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:40.159] E0612 00:09:39.489626   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:40.201] E0612 00:09:40.199814   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:40.290] E0612 00:09:40.289843   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:40.390] Successful describe services:
I0612 00:09:40.391] Name:              kubernetes
I0612 00:09:40.391] Namespace:         default
I0612 00:09:40.391] Labels:            component=apiserver
I0612 00:09:40.392]                    provider=kubernetes
I0612 00:09:40.392] Annotations:       <none>
... skipping 238 lines ...
I0612 00:09:41.118]   selector:
I0612 00:09:41.119]     role: padawan
I0612 00:09:41.119]   sessionAffinity: None
I0612 00:09:41.119]   type: ClusterIP
I0612 00:09:41.119] status:
I0612 00:09:41.119]   loadBalancer: {}
W0612 00:09:41.219] E0612 00:09:40.396242   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:41.220] E0612 00:09:40.491082   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:41.220] error: you must specify resources by --filename when --local is set.
W0612 00:09:41.220] Example resource specifications include:
W0612 00:09:41.220]    '-f rsrc.yaml'
W0612 00:09:41.220]    '--filename=rsrc.json'
W0612 00:09:41.220] E0612 00:09:41.201245   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:41.291] E0612 00:09:41.291230   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:41.392] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0612 00:09:41.433] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0612 00:09:41.512] service "redis-master" deleted
I0612 00:09:41.603] core.sh:900: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0612 00:09:41.700] core.sh:904: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0612 00:09:41.868] service/redis-master created
... skipping 5 lines ...
I0612 00:09:42.589] core.sh:940: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0612 00:09:42.669] service "redis-master" deleted
I0612 00:09:42.751] service "service-v1-test" deleted
I0612 00:09:42.844] core.sh:948: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0612 00:09:42.941] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0612 00:09:43.099] service/redis-master created
W0612 00:09:43.200] E0612 00:09:41.397696   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
... skipping 6 lines: identical reflector.go:125 list errors, repeating roughly every 100ms ...
W0612 00:09:43.294] E0612 00:09:43.294222   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:43.395] service/redis-slave created
I0612 00:09:43.395] core.sh:957: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0612 00:09:43.458] Successful
I0612 00:09:43.459] message:NAME           RSRC
I0612 00:09:43.459] kubernetes     145
I0612 00:09:43.459] redis-master   1504
... skipping 84 lines ...
I0612 00:09:48.301] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:09:48.393] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0612 00:09:48.496] daemonset.extensions/bind rolled back
I0612 00:09:48.591] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0612 00:09:48.688] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0612 00:09:48.787] Successful
I0612 00:09:48.788] message:error: unable to find specified revision 1000000 in history
I0612 00:09:48.788] has:unable to find specified revision
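`unable to find specified revision 1000000 in history` means no ControllerRevision with that .revision number exists for the DaemonSet: rollout history is stored as ControllerRevision objects matched by the workload's selector, and kubectl resolves --to-revision against them. A minimal sketch of listing that history, assuming a kubeconfig at /tmp/kubeconfig and the pre-1.18 client-go signatures vendored in this era (List without a context argument):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// DaemonSet rollout history lives in ControllerRevisions matched by the
	// DaemonSet's selector; rollback fails when no item has the requested revision.
	revs, err := clientset.AppsV1().ControllerRevisions("namespace-1560298186-9524").
		List(metav1.ListOptions{LabelSelector: "service=bind"})
	if err != nil {
		panic(err)
	}
	for _, r := range revs.Items {
		fmt.Println(r.Name, r.Revision)
	}
}
```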
W0612 00:09:48.888] E0612 00:09:43.400573   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.889] E0612 00:09:43.494833   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.889] E0612 00:09:44.205635   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.889] E0612 00:09:44.295629   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.889] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0612 00:09:48.890] I0612 00:09:44.399900   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"d0ae4859-ed67-444e-8502-eee41817c1f7", APIVersion:"apps/v1", ResourceVersion:"1519", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6b5b57544d to 2
W0612 00:09:48.890] E0612 00:09:44.401757   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.890] I0612 00:09:44.408566   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6b5b57544d", UID:"0229720d-e4f9-4e91-a580-2d812adca771", APIVersion:"apps/v1", ResourceVersion:"1520", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6b5b57544d-s8t66
W0612 00:09:48.891] I0612 00:09:44.413371   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6b5b57544d", UID:"0229720d-e4f9-4e91-a580-2d812adca771", APIVersion:"apps/v1", ResourceVersion:"1520", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6b5b57544d-kwt8c
W0612 00:09:48.891] E0612 00:09:44.497722   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.891] E0612 00:09:45.207025   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.892] E0612 00:09:45.297018   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.892] E0612 00:09:45.403261   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.892] E0612 00:09:45.499365   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:48.893] I0612 00:09:45.911011   48067 controller.go:606] quota admission added evaluator for: daemonsets.extensions
W0612 00:09:48.893] E0612 00:09:46.208403   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
... skipping 10 lines: identical reflector.go:125 list errors, repeating roughly every 100ms ...
W0612 00:09:48.895] E0612 00:09:48.503761   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:48.996] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0612 00:09:48.996] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0612 00:09:49.107] daemonset.extensions/bind rolled back
I0612 00:09:49.211] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0612 00:09:49.316] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:09:49.414] apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
... skipping 9 lines ...
I0612 00:09:49.655] namespace/namespace-1560298189-20902 created
I0612 00:09:49.724] Context "test" modified.
I0612 00:09:49.731] +++ [0612 00:09:49] Testing kubectl(v1:replicationcontrollers)
I0612 00:09:49.820] core.sh:1034: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:49.991] replicationcontroller/frontend created
I0612 00:09:50.084] replicationcontroller "frontend" deleted
W0612 00:09:50.187] E0612 00:09:49.131990   51422 daemon_controller.go:302] namespace-1560298186-9524/bind failed with : error storing status for daemon set &v1.DaemonSet{ ... skipping full object dump ... }: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
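The `Operation cannot be fulfilled ... the object has been modified` error above is an optimistic-concurrency conflict: the write carried a stale resourceVersion. The standard client-side remedy is retry.RetryOnConflict, re-reading the object on each attempt; a minimal generic sketch (not the daemon controller's own code), assuming a kubeconfig at /tmp/kubeconfig, a hypothetical label mutation, and pre-1.18 client-go signatures:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	ds := clientset.AppsV1().DaemonSets("namespace-1560298186-9524")

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the latest copy on every attempt; writing through a stale
		// resourceVersion is exactly what produced the error in the log.
		cur, getErr := ds.Get("bind", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if cur.Labels == nil {
			cur.Labels = map[string]string{}
		}
		cur.Labels["touched"] = "true" // hypothetical mutation
		_, updateErr := ds.Update(cur)
		return updateErr // a Conflict error here triggers another attempt
	})
	if err != nil {
		panic(err)
	}
}
```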
W0612 00:09:50.188] E0612 00:09:49.212507   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:50.188] E0612 00:09:49.302904   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:50.188] E0612 00:09:49.409269   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:50.188] E0612 00:09:49.504832   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:50.189] I0612 00:09:49.998852   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"20afb22c-67f8-42c8-bcd7-d9b64e7444d9", APIVersion:"v1", ResourceVersion:"1598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-79m5m
W0612 00:09:50.189] I0612 00:09:50.004010   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"20afb22c-67f8-42c8-bcd7-d9b64e7444d9", APIVersion:"v1", ResourceVersion:"1598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5b42l
W0612 00:09:50.189] I0612 00:09:50.005461   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"20afb22c-67f8-42c8-bcd7-d9b64e7444d9", APIVersion:"v1", ResourceVersion:"1598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-79wqt
W0612 00:09:50.214] E0612 00:09:50.214365   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:50.304] E0612 00:09:50.304217   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:50.405] core.sh:1039: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:50.406] core.sh:1043: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:50.460] replicationcontroller/frontend created
I0612 00:09:50.554] core.sh:1047: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0612 00:09:50.686] core.sh:1049: Successful describe rc frontend:
I0612 00:09:50.686] Name:         frontend
I0612 00:09:50.686] Namespace:    namespace-1560298189-20902
I0612 00:09:50.687] Selector:     app=guestbook,tier=frontend
I0612 00:09:50.687] Labels:       app=guestbook
I0612 00:09:50.687]               tier=frontend
I0612 00:09:50.687] Annotations:  <none>
I0612 00:09:50.687] Replicas:     3 current / 3 desired
I0612 00:09:50.687] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:50.687] Pod Template:
I0612 00:09:50.687]   Labels:  app=guestbook
I0612 00:09:50.687]            tier=frontend
I0612 00:09:50.687]   Containers:
I0612 00:09:50.687]    php-redis:
I0612 00:09:50.687]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0612 00:09:50.788] Namespace:    namespace-1560298189-20902
I0612 00:09:50.789] Selector:     app=guestbook,tier=frontend
I0612 00:09:50.789] Labels:       app=guestbook
I0612 00:09:50.789]               tier=frontend
I0612 00:09:50.789] Annotations:  <none>
I0612 00:09:50.789] Replicas:     3 current / 3 desired
I0612 00:09:50.790] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:50.790] Pod Template:
I0612 00:09:50.790]   Labels:  app=guestbook
I0612 00:09:50.790]            tier=frontend
I0612 00:09:50.790]   Containers:
I0612 00:09:50.791]    php-redis:
I0612 00:09:50.791]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0612 00:09:50.887] Namespace:    namespace-1560298189-20902
I0612 00:09:50.887] Selector:     app=guestbook,tier=frontend
I0612 00:09:50.887] Labels:       app=guestbook
I0612 00:09:50.887]               tier=frontend
I0612 00:09:50.887] Annotations:  <none>
I0612 00:09:50.887] Replicas:     3 current / 3 desired
I0612 00:09:50.887] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:50.887] Pod Template:
I0612 00:09:50.887]   Labels:  app=guestbook
I0612 00:09:50.887]            tier=frontend
I0612 00:09:50.887]   Containers:
I0612 00:09:50.888]    php-redis:
I0612 00:09:50.888]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0612 00:09:50.988] Namespace:    namespace-1560298189-20902
I0612 00:09:50.989] Selector:     app=guestbook,tier=frontend
I0612 00:09:50.989] Labels:       app=guestbook
I0612 00:09:50.989]               tier=frontend
I0612 00:09:50.989] Annotations:  <none>
I0612 00:09:50.989] Replicas:     3 current / 3 desired
I0612 00:09:50.989] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:50.989] Pod Template:
I0612 00:09:50.989]   Labels:  app=guestbook
I0612 00:09:50.989]            tier=frontend
I0612 00:09:50.989]   Containers:
I0612 00:09:50.989]    php-redis:
I0612 00:09:50.989]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0612 00:09:50.990]   Type    Reason            Age   From                    Message
I0612 00:09:50.990]   ----    ------            ----  ----                    -------
I0612 00:09:50.990]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-rhlmk
I0612 00:09:50.991]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-557hj
I0612 00:09:50.991]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mqv8k
I0612 00:09:50.991]
W0612 00:09:51.091] E0612 00:09:50.410643   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:51.092] I0612 00:09:50.464256   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"6deded9a-a261-441b-8bb7-2146c47321eb", APIVersion:"v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rhlmk
W0612 00:09:51.092] I0612 00:09:50.470323   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"6deded9a-a261-441b-8bb7-2146c47321eb", APIVersion:"v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-557hj
W0612 00:09:51.092] I0612 00:09:50.471261   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"6deded9a-a261-441b-8bb7-2146c47321eb", APIVersion:"v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mqv8k
W0612 00:09:51.093] E0612 00:09:50.506039   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:51.193] Successful describe rc:
I0612 00:09:51.193] Name:         frontend
I0612 00:09:51.193] Namespace:    namespace-1560298189-20902
I0612 00:09:51.193] Selector:     app=guestbook,tier=frontend
I0612 00:09:51.193] Labels:       app=guestbook
I0612 00:09:51.193]               tier=frontend
I0612 00:09:51.194] Annotations:  <none>
I0612 00:09:51.194] Replicas:     3 current / 3 desired
I0612 00:09:51.194] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:51.194] Pod Template:
I0612 00:09:51.194]   Labels:  app=guestbook
I0612 00:09:51.194]            tier=frontend
I0612 00:09:51.194]   Containers:
I0612 00:09:51.194]    php-redis:
I0612 00:09:51.195]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0612 00:09:51.215] Namespace:    namespace-1560298189-20902
I0612 00:09:51.215] Selector:     app=guestbook,tier=frontend
I0612 00:09:51.215] Labels:       app=guestbook
I0612 00:09:51.215]               tier=frontend
I0612 00:09:51.215] Annotations:  <none>
I0612 00:09:51.215] Replicas:     3 current / 3 desired
I0612 00:09:51.215] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:51.215] Pod Template:
I0612 00:09:51.215]   Labels:  app=guestbook
I0612 00:09:51.215]            tier=frontend
I0612 00:09:51.215]   Containers:
I0612 00:09:51.215]    php-redis:
I0612 00:09:51.215]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0612 00:09:51.311] Namespace:    namespace-1560298189-20902
I0612 00:09:51.311] Selector:     app=guestbook,tier=frontend
I0612 00:09:51.311] Labels:       app=guestbook
I0612 00:09:51.311]               tier=frontend
I0612 00:09:51.312] Annotations:  <none>
I0612 00:09:51.312] Replicas:     3 current / 3 desired
I0612 00:09:51.312] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:51.312] Pod Template:
I0612 00:09:51.312]   Labels:  app=guestbook
I0612 00:09:51.312]            tier=frontend
I0612 00:09:51.312]   Containers:
I0612 00:09:51.313]    php-redis:
I0612 00:09:51.313]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0612 00:09:51.418] Namespace:    namespace-1560298189-20902
I0612 00:09:51.418] Selector:     app=guestbook,tier=frontend
I0612 00:09:51.418] Labels:       app=guestbook
I0612 00:09:51.419]               tier=frontend
I0612 00:09:51.419] Annotations:  <none>
I0612 00:09:51.419] Replicas:     3 current / 3 desired
I0612 00:09:51.419] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0612 00:09:51.420] Pod Template:
I0612 00:09:51.420]   Labels:  app=guestbook
I0612 00:09:51.420]            tier=frontend
I0612 00:09:51.420]   Containers:
I0612 00:09:51.420]    php-redis:
I0612 00:09:51.421]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 21 lines ...
I0612 00:09:52.112] replicationcontroller/frontend scaled
I0612 00:09:52.202] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0612 00:09:52.286] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0612 00:09:52.366] replicationcontroller/frontend scaled
I0612 00:09:52.455] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0612 00:09:52.533] replicationcontroller "frontend" deleted
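The scale assertions above (core.sh:1087-1095), together with the 'Expected replicas to be 3, was 2' error replayed in the warnings below, correspond to invocations of roughly this shape; the values are inferred from the assertions:

    # Guarded scale: fails when the live count differs from --current-replicas,
    # which is what produces 'error: Expected replicas to be 3, was 2'
    kubectl scale rc frontend --current-replicas=3 --replicas=2
    # Unconditional scale
    kubectl scale rc frontend --replicas=2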
W0612 00:09:52.634] E0612 00:09:51.215636   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.634] E0612 00:09:51.305331   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.635] E0612 00:09:51.411905   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.635] E0612 00:09:51.507138   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.636] I0612 00:09:51.606476   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"6deded9a-a261-441b-8bb7-2146c47321eb", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-mqv8k
W0612 00:09:52.636] error: Expected replicas to be 3, was 2
W0612 00:09:52.636] I0612 00:09:52.117810   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"6deded9a-a261-441b-8bb7-2146c47321eb", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7tk6d
W0612 00:09:52.636] E0612 00:09:52.217321   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.637] E0612 00:09:52.306952   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.637] I0612 00:09:52.373289   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"6deded9a-a261-441b-8bb7-2146c47321eb", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7tk6d
W0612 00:09:52.638] E0612 00:09:52.413307   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.638] E0612 00:09:52.508590   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:52.714] I0612 00:09:52.713899   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-master", UID:"92affa27-bb8e-4b08-8d24-64f652ff393e", APIVersion:"v1", ResourceVersion:"1649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jzprk
I0612 00:09:52.815] replicationcontroller/redis-master created
I0612 00:09:52.893] replicationcontroller/redis-slave created
I0612 00:09:52.991] replicationcontroller/redis-master scaled
I0612 00:09:53.012] replicationcontroller/redis-slave scaled
I0612 00:09:53.107] core.sh:1105: Successful get rc redis-master {{.spec.replicas}}: 4
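The back-to-back 'scaled' lines for redis-master and redis-slave suggest a single multi-resource scale; a sketch:

    # Scale several replication controllers in one command
    kubectl scale rc redis-master redis-slave --replicas=4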
... skipping 4 lines ...
W0612 00:09:53.371] I0612 00:09:52.903564   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-slave", UID:"1f5dfae0-1145-4e0a-927c-9ffcb1239ed7", APIVersion:"v1", ResourceVersion:"1654", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-swldp
W0612 00:09:53.371] I0612 00:09:52.995006   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-master", UID:"92affa27-bb8e-4b08-8d24-64f652ff393e", APIVersion:"v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-nbqgp
W0612 00:09:53.371] I0612 00:09:53.002266   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-master", UID:"92affa27-bb8e-4b08-8d24-64f652ff393e", APIVersion:"v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-b25cp
W0612 00:09:53.371] I0612 00:09:53.003346   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-master", UID:"92affa27-bb8e-4b08-8d24-64f652ff393e", APIVersion:"v1", ResourceVersion:"1661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-8jjnn
W0612 00:09:53.372] I0612 00:09:53.009353   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-slave", UID:"1f5dfae0-1145-4e0a-927c-9ffcb1239ed7", APIVersion:"v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-6d595
W0612 00:09:53.372] I0612 00:09:53.014727   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-slave", UID:"1f5dfae0-1145-4e0a-927c-9ffcb1239ed7", APIVersion:"v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-cljdt
W0612 00:09:53.372] E0612 00:09:53.218702   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:53.372] E0612 00:09:53.308401   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:53.415] E0612 00:09:53.414987   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:53.455] I0612 00:09:53.454519   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment", UID:"c13cc850-f61c-4dfd-a83b-f9cdcac072a8", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0612 00:09:53.461] I0612 00:09:53.461030   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"3d5693ca-675d-4f70-afb8-3634d4f32337", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-h866b
W0612 00:09:53.465] I0612 00:09:53.465381   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"3d5693ca-675d-4f70-afb8-3634d4f32337", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-n44xx
W0612 00:09:53.466] I0612 00:09:53.465435   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"3d5693ca-675d-4f70-afb8-3634d4f32337", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-j4dvk
W0612 00:09:53.510] E0612 00:09:53.509639   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:53.561] I0612 00:09:53.560970   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment", UID:"c13cc850-f61c-4dfd-a83b-f9cdcac072a8", APIVersion:"apps/v1", ResourceVersion:"1709", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-995fc88bd to 1
W0612 00:09:53.565] I0612 00:09:53.565058   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"3d5693ca-675d-4f70-afb8-3634d4f32337", APIVersion:"apps/v1", ResourceVersion:"1710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-995fc88bd-n44xx
W0612 00:09:53.567] I0612 00:09:53.566970   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"3d5693ca-675d-4f70-afb8-3634d4f32337", APIVersion:"apps/v1", ResourceVersion:"1710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-995fc88bd-h866b
I0612 00:09:53.667] deployment.apps/nginx-deployment created
I0612 00:09:53.668] deployment.extensions/nginx-deployment scaled
I0612 00:09:53.668] core.sh:1115: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
... skipping 8 lines ...
I0612 00:09:54.063] service "expose-test-deployment" deleted
I0612 00:09:54.160] Successful
I0612 00:09:54.160] message:service/expose-test-deployment exposed
I0612 00:09:54.161] has:service/expose-test-deployment exposed
I0612 00:09:54.236] service "expose-test-deployment" deleted
I0612 00:09:54.330] Successful
I0612 00:09:54.330] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0612 00:09:54.330] See 'kubectl expose -h' for help and examples
I0612 00:09:54.330] has:invalid deployment: no selectors
I0612 00:09:54.414] Successful
I0612 00:09:54.414] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0612 00:09:54.414] See 'kubectl expose -h' for help and examples
I0612 00:09:54.414] has:invalid deployment: no selectors
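Both failures above come from exposing a deployment whose pod selector kubectl cannot introspect; passing one explicitly avoids the error. A sketch (the selector value is a placeholder):

    # Fails: no selector available via --selector or introspection
    kubectl expose deployment nginx-deployment --port=80
    # Works: selector supplied explicitly
    kubectl expose deployment nginx-deployment --port=80 --selector=app=nginx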
W0612 00:09:54.515] E0612 00:09:54.220072   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:54.515] E0612 00:09:54.311544   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:54.515] E0612 00:09:54.416417   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:54.516] E0612 00:09:54.511173   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:54.586] I0612 00:09:54.585877   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment", UID:"3c681fca-db9c-4d63-9b9e-ae94ceec2c36", APIVersion:"apps/v1", ResourceVersion:"1745", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0612 00:09:54.592] I0612 00:09:54.592173   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"74b317b5-6ed6-4cd7-8d51-7b552033869a", APIVersion:"apps/v1", ResourceVersion:"1746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-8pn9z
W0612 00:09:54.596] I0612 00:09:54.595527   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"74b317b5-6ed6-4cd7-8d51-7b552033869a", APIVersion:"apps/v1", ResourceVersion:"1746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-gl545
W0612 00:09:54.596] I0612 00:09:54.596405   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-995fc88bd", UID:"74b317b5-6ed6-4cd7-8d51-7b552033869a", APIVersion:"apps/v1", ResourceVersion:"1746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-hrvc2
I0612 00:09:54.697] deployment.apps/nginx-deployment created
I0612 00:09:54.698] core.sh:1150: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
... skipping 18 lines ...
I0612 00:09:56.413] service "frontend" deleted
I0612 00:09:56.421] service "frontend-2" deleted
I0612 00:09:56.428] service "frontend-3" deleted
I0612 00:09:56.434] service "frontend-4" deleted
I0612 00:09:56.440] service "frontend-5" deleted
I0612 00:09:56.532] Successful
I0612 00:09:56.532] message:error: cannot expose a Node
I0612 00:09:56.533] has:cannot expose
I0612 00:09:56.616] Successful
I0612 00:09:56.617] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0612 00:09:56.618] has:metadata.name: Invalid value
I0612 00:09:56.707] Successful
I0612 00:09:56.708] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0612 00:09:56.708] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0612 00:09:56.786] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
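Service names must be valid DNS labels of at most 63 characters, which is exactly what this pair of results exercises; a sketch (the exposed target is a placeholder):

    # Rejected: metadata.name exceeds 63 characters
    kubectl expose rc frontend --port=80 \
      --name=invalid-large-service-name-that-has-more-than-sixty-three-characters
    # Accepted: exactly 63 characters
    kubectl expose rc frontend --port=80 \
      --name=kubernetes-serve-hostname-testing-sixty-three-characters-in-len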
W0612 00:09:56.887] I0612 00:09:55.113438   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"b7264e09-c820-4071-b51b-c4ae3750414f", APIVersion:"v1", ResourceVersion:"1774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ptq6x
W0612 00:09:56.888] I0612 00:09:55.117695   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"b7264e09-c820-4071-b51b-c4ae3750414f", APIVersion:"v1", ResourceVersion:"1774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gt4ct
W0612 00:09:56.888] I0612 00:09:55.118082   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"b7264e09-c820-4071-b51b-c4ae3750414f", APIVersion:"v1", ResourceVersion:"1774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kjjfw
W0612 00:09:56.889] E0612 00:09:55.221378   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.889] E0612 00:09:55.312673   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.890] E0612 00:09:55.418062   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.890] E0612 00:09:55.512706   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.890] E0612 00:09:56.222797   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.890] E0612 00:09:56.313878   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.891] E0612 00:09:56.418968   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:56.891] E0612 00:09:56.514100   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:56.992] Successful
I0612 00:09:56.992] message:service/etcd-server exposed
I0612 00:09:56.992] has:etcd-server exposed
I0612 00:09:56.993] core.sh:1212: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I0612 00:09:57.055] core.sh:1213: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I0612 00:09:57.131] service "etcd-server" deleted
I0612 00:09:57.228] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0612 00:09:57.305] replicationcontroller "frontend" deleted
I0612 00:09:57.404] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:57.492] core.sh:1227: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:57.663] replicationcontroller/frontend created
W0612 00:09:57.763] E0612 00:09:57.224350   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:57.764] E0612 00:09:57.314790   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:57.764] E0612 00:09:57.420563   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:57.764] E0612 00:09:57.515936   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:57.764] I0612 00:09:57.674244   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"56ef9ac6-a297-4a67-bc47-4311f236fbc7", APIVersion:"v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w27zj
W0612 00:09:57.765] I0612 00:09:57.679225   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"56ef9ac6-a297-4a67-bc47-4311f236fbc7", APIVersion:"v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gdqjv
W0612 00:09:57.765] I0612 00:09:57.679743   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"56ef9ac6-a297-4a67-bc47-4311f236fbc7", APIVersion:"v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-db97r
W0612 00:09:57.852] I0612 00:09:57.852302   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-slave", UID:"98f12b50-4672-4a57-8ce3-b68ad1101c94", APIVersion:"v1", ResourceVersion:"1846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-fs69m
W0612 00:09:57.857] I0612 00:09:57.856968   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"redis-slave", UID:"98f12b50-4672-4a57-8ce3-b68ad1101c94", APIVersion:"v1", ResourceVersion:"1846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-xr429
I0612 00:09:57.958] replicationcontroller/redis-slave created
... skipping 8 lines ...
I0612 00:09:58.659] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0612 00:09:58.754] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0612 00:09:58.834] horizontalpodautoscaler.autoscaling "frontend" deleted
I0612 00:09:58.920] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0612 00:09:59.016] core.sh:1254: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0612 00:09:59.095] horizontalpodautoscaler.autoscaling "frontend" deleted
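The two HPA assertions above (1 2 70 and 2 3 80) map directly onto autoscale invocations plus a go-template read-back; a sketch with values taken from the log:

    kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70
    kubectl get hpa frontend -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
    kubectl delete hpa frontend
    kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80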
W0612 00:09:59.195] E0612 00:09:58.227255   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:59.196] E0612 00:09:58.315995   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:59.196] E0612 00:09:58.422127   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:59.196] I0612 00:09:58.484362   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"3f61d0a3-a6aa-4150-b712-187024370e5f", APIVersion:"v1", ResourceVersion:"1865", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pwlpc
W0612 00:09:59.196] I0612 00:09:58.489402   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"3f61d0a3-a6aa-4150-b712-187024370e5f", APIVersion:"v1", ResourceVersion:"1865", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dsqnh
W0612 00:09:59.197] I0612 00:09:58.489788   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560298189-20902", Name:"frontend", UID:"3f61d0a3-a6aa-4150-b712-187024370e5f", APIVersion:"v1", ResourceVersion:"1865", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2z6wb
W0612 00:09:59.197] E0612 00:09:58.516963   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:59.197] Error: required flag(s) "max" not set
W0612 00:09:59.197] 
W0612 00:09:59.197] 
W0612 00:09:59.197] Examples:
W0612 00:09:59.197]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0612 00:09:59.197]   kubectl autoscale deployment foo --min=2 --max=10
W0612 00:09:59.197]   
... skipping 19 lines ...
W0612 00:09:59.202] Usage:
W0612 00:09:59.202]   kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [options]
W0612 00:09:59.202] 
W0612 00:09:59.202] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0612 00:09:59.202] 
W0612 00:09:59.203] required flag(s) "max" not set
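The usage dump above is what kubectl prints when the one required flag is omitted; a sketch of the failing and fixed invocations:

    # Reproduces 'required flag(s) "max" not set'
    kubectl autoscale rc frontend --cpu-percent=80
    # Fixed
    kubectl autoscale rc frontend --max=2 --cpu-percent=80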
W0612 00:09:59.229] E0612 00:09:59.228650   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:59.318] E0612 00:09:59.317553   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:59.418] replicationcontroller "frontend" deleted
I0612 00:09:59.419] core.sh:1263: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:09:59.420] apiVersion: apps/v1
I0612 00:09:59.420] kind: Deployment
I0612 00:09:59.420] metadata:
I0612 00:09:59.420]   creationTimestamp: null
... skipping 24 lines ...
I0612 00:09:59.422]           limits:
I0612 00:09:59.422]             cpu: 300m
I0612 00:09:59.422]           requests:
I0612 00:09:59.422]             cpu: 300m
I0612 00:09:59.423]       terminationGracePeriodSeconds: 0
I0612 00:09:59.423] status: {}
W0612 00:09:59.523] E0612 00:09:59.423317   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:09:59.523] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
W0612 00:09:59.523] E0612 00:09:59.518267   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:09:59.681] deployment.apps/nginx-deployment-resources created
W0612 00:09:59.782] I0612 00:09:59.687325   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources", UID:"5754956c-bb2f-4f8d-bfb5-cfe6e1d66715", APIVersion:"apps/v1", ResourceVersion:"1886", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8495f5b88c to 3
W0612 00:09:59.783] I0612 00:09:59.691956   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-8495f5b88c", UID:"b4c21d31-d798-4118-9bf2-724c0966c186", APIVersion:"apps/v1", ResourceVersion:"1887", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8495f5b88c-zbzk2
W0612 00:09:59.783] I0612 00:09:59.695252   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-8495f5b88c", UID:"b4c21d31-d798-4118-9bf2-724c0966c186", APIVersion:"apps/v1", ResourceVersion:"1887", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8495f5b88c-2khxg
W0612 00:09:59.783] I0612 00:09:59.696747   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-8495f5b88c", UID:"b4c21d31-d798-4118-9bf2-724c0966c186", APIVersion:"apps/v1", ResourceVersion:"1887", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8495f5b88c-ch4mw
I0612 00:09:59.883] core.sh:1269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0612 00:10:00.079] deployment.extensions/nginx-deployment-resources resource requirements updated
I0612 00:10:00.175] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0612 00:10:00.258] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0612 00:10:00.435] deployment.extensions/nginx-deployment-resources resource requirements updated
W0612 00:10:00.536] I0612 00:10:00.085358   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources", UID:"5754956c-bb2f-4f8d-bfb5-cfe6e1d66715", APIVersion:"apps/v1", ResourceVersion:"1900", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-68fb698987 to 1
W0612 00:10:00.537] I0612 00:10:00.090121   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-68fb698987", UID:"e73a5bdd-6fa4-4c64-b791-1c87f901c283", APIVersion:"apps/v1", ResourceVersion:"1901", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-68fb698987-p77sr
W0612 00:10:00.537] E0612 00:10:00.230361   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:00.537] E0612 00:10:00.319371   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:00.537] error: unable to find container named redis
W0612 00:10:00.537] E0612 00:10:00.424709   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:00.538] I0612 00:10:00.473733   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources", UID:"5754956c-bb2f-4f8d-bfb5-cfe6e1d66715", APIVersion:"apps/v1", ResourceVersion:"1909", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-8495f5b88c to 2
W0612 00:10:00.538] I0612 00:10:00.486539   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-8495f5b88c", UID:"b4c21d31-d798-4118-9bf2-724c0966c186", APIVersion:"apps/v1", ResourceVersion:"1913", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-8495f5b88c-2khxg
W0612 00:10:00.538] I0612 00:10:00.498181   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources", UID:"5754956c-bb2f-4f8d-bfb5-cfe6e1d66715", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-cb4546d8f to 1
W0612 00:10:00.538] I0612 00:10:00.508283   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-cb4546d8f", UID:"9950601e-e8eb-4681-a132-d6039b4b82c3", APIVersion:"apps/v1", ResourceVersion:"1919", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-cb4546d8f-wdtrr
W0612 00:10:00.539] E0612 00:10:00.519231   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:10:00.639] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0612 00:10:00.664] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0612 00:10:00.758] deployment.apps/nginx-deployment-resources resource requirements updated
W0612 00:10:00.859] I0612 00:10:00.783828   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources", UID:"5754956c-bb2f-4f8d-bfb5-cfe6e1d66715", APIVersion:"apps/v1", ResourceVersion:"1930", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-8495f5b88c to 1
W0612 00:10:00.859] I0612 00:10:00.792718   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources-8495f5b88c", UID:"b4c21d31-d798-4118-9bf2-724c0966c186", APIVersion:"apps/v1", ResourceVersion:"1934", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-8495f5b88c-zbzk2
W0612 00:10:00.859] I0612 00:10:00.802535   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298189-20902", Name:"nginx-deployment-resources", UID:"5754956c-bb2f-4f8d-bfb5-cfe6e1d66715", APIVersion:"apps/v1", ResourceVersion:"1933", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-c97dcbbb4 to 1
... skipping 207 lines ...
I0612 00:10:01.151]     status: "True"
I0612 00:10:01.151]     type: Progressing
I0612 00:10:01.151]   observedGeneration: 4
I0612 00:10:01.151]   replicas: 4
I0612 00:10:01.151]   unavailableReplicas: 4
I0612 00:10:01.151]   updatedReplicas: 1
W0612 00:10:01.251] error: you must specify resources by --filename when --local is set.
W0612 00:10:01.252] Example resource specifications include:
W0612 00:10:01.252]    '-f rsrc.yaml'
W0612 00:10:01.252]    '--filename=rsrc.json'
W0612 00:10:01.252] E0612 00:10:01.231822   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:01.321] E0612 00:10:01.320945   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:10:01.422] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0612 00:10:01.422] core.sh:1291: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0612 00:10:01.475] core.sh:1292: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I0612 00:10:01.551] deployment.extensions "nginx-deployment-resources" deleted
I0612 00:10:01.576] +++ exit code: 0
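The resource-update phase that just finished (core.sh:1269-1292) drives kubectl set resources; the two errors replayed in the warnings above ('unable to find container named redis' and the --local/--filename complaint) fall out of invocations like these (a sketch, with cpu values from the assertions and deploy.yaml as a placeholder file):

    # Update every container's cpu limit
    kubectl set resources deployment nginx-deployment-resources --limits=cpu=100m
    # Target one container by name; no container is named 'redis' here, hence the error
    kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m
    # --local never touches the server, so a file must be supplied with -f
    kubectl set resources -f deploy.yaml --local -o yaml --limits=cpu=200m --requests=cpu=300m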
I0612 00:10:01.612] Recording: run_deployment_tests
... skipping 26 lines ...
I0612 00:10:02.646] Successful
I0612 00:10:02.646] message:extensions/v1beta1
I0612 00:10:02.646] has:extensions/v1beta1
I0612 00:10:02.727] Successful
I0612 00:10:02.727] message:apps/v1
I0612 00:10:02.727] has:apps/v1
W0612 00:10:02.828] E0612 00:10:01.426509   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:02.828] E0612 00:10:01.521329   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:02.828] I0612 00:10:01.883474   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"test-nginx-extensions", UID:"b149d381-aede-431d-9c52-bbb2c3213b5a", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-6464c67b69 to 1
W0612 00:10:02.829] I0612 00:10:01.891887   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"test-nginx-extensions-6464c67b69", UID:"3c9b4198-15b2-49c7-9f74-52c8766b92fb", APIVersion:"apps/v1", ResourceVersion:"1966", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-6464c67b69-8kbf9
W0612 00:10:02.829] E0612 00:10:02.233501   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:02.829] E0612 00:10:02.322006   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:02.829] I0612 00:10:02.392287   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"test-nginx-apps", UID:"f2833dfc-dd9d-407c-af84-c56fccb94be2", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-84cc8d94ff to 1
W0612 00:10:02.830] I0612 00:10:02.400266   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"test-nginx-apps-84cc8d94ff", UID:"a1d200e0-919a-493a-b70f-caaec9f91ec8", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-84cc8d94ff-wk5z2
W0612 00:10:02.830] E0612 00:10:02.427487   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:02.830] E0612 00:10:02.522898   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:10:02.930] Successful describe rs:
I0612 00:10:02.931] Name:           test-nginx-apps-84cc8d94ff
I0612 00:10:02.931] Namespace:      namespace-1560298201-28817
I0612 00:10:02.931] Selector:       app=test-nginx-apps,pod-template-hash=84cc8d94ff
I0612 00:10:02.931] Labels:         app=test-nginx-apps
I0612 00:10:02.931]                 pod-template-hash=84cc8d94ff
I0612 00:10:02.931] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0612 00:10:02.931]                 deployment.kubernetes.io/max-replicas: 2
I0612 00:10:02.932]                 deployment.kubernetes.io/revision: 1
I0612 00:10:02.932] Controlled By:  Deployment/test-nginx-apps
I0612 00:10:02.932] Replicas:       1 current / 1 desired
I0612 00:10:02.932] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0612 00:10:02.932] Pod Template:
I0612 00:10:02.932]   Labels:  app=test-nginx-apps
I0612 00:10:02.932]            pod-template-hash=84cc8d94ff
I0612 00:10:02.932]   Containers:
I0612 00:10:02.933]    nginx:
I0612 00:10:02.933]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 47 lines ...
I0612 00:10:04.506] apps.sh:254: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:10:04.580] deployment.apps/nginx-deployment created
I0612 00:10:04.680] apps.sh:258: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I0612 00:10:04.757] deployment.extensions "nginx-deployment" deleted
W0612 00:10:04.858] I0612 00:10:03.214284   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"nginx-with-command", UID:"bfeeff77-a455-4ef6-b289-ad7d4533b09f", APIVersion:"apps/v1", ResourceVersion:"1994", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-6bdd884 to 1
W0612 00:10:04.859] I0612 00:10:03.217266   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-with-command-6bdd884", UID:"f69bcd7e-a588-4dc0-aa61-73f21b9defce", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-6bdd884-6kpmw
W0612 00:10:04.860] E0612 00:10:03.235437   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.860] E0612 00:10:03.323482   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.860] E0612 00:10:03.428758   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.861] E0612 00:10:03.524420   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.861] I0612 00:10:03.643384   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"deployment-with-unixuserid", UID:"cfc9f9db-1691-41c7-9b0a-294deb76ba6f", APIVersion:"apps/v1", ResourceVersion:"2008", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-56fc88c5d7 to 1
W0612 00:10:04.861] I0612 00:10:03.648388   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"deployment-with-unixuserid-56fc88c5d7", UID:"a8b8fec9-0295-420b-a43f-ac9efb7e8cb5", APIVersion:"apps/v1", ResourceVersion:"2009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-56fc88c5d7-fkgdr
W0612 00:10:04.862] I0612 00:10:04.073326   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment", UID:"a214b3f1-afa9-4fcc-9249-1b7d3f10a2e8", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0612 00:10:04.862] I0612 00:10:04.079119   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-995fc88bd", UID:"8453792b-c688-4800-b007-4d306d780c3b", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-s5nzt
W0612 00:10:04.863] I0612 00:10:04.084100   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-995fc88bd", UID:"8453792b-c688-4800-b007-4d306d780c3b", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-ltgl4
W0612 00:10:04.863] I0612 00:10:04.084721   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-995fc88bd", UID:"8453792b-c688-4800-b007-4d306d780c3b", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-5st4t
W0612 00:10:04.863] E0612 00:10:04.236811   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.864] E0612 00:10:04.324526   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.864] E0612 00:10:04.430363   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.864] E0612 00:10:04.525799   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:04.864] I0612 00:10:04.585039   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment", UID:"d0687837-4871-4519-98ca-4d67be1bdf40", APIVersion:"apps/v1", ResourceVersion:"2044", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-9c45fbd9f to 1
W0612 00:10:04.865] I0612 00:10:04.592740   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-9c45fbd9f", UID:"a16ee6a8-98d1-46e0-974f-42748455070e", APIVersion:"apps/v1", ResourceVersion:"2045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-9c45fbd9f-nz5hr
I0612 00:10:04.965] apps.sh:263: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0612 00:10:04.976] apps.sh:264: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I0612 00:10:05.129] replicaset.extensions "nginx-deployment-9c45fbd9f" deleted
I0612 00:10:05.221] apps.sh:272: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 7 lines ...
I0612 00:10:06.086] deployment.apps/nginx created
I0612 00:10:06.186] apps.sh:290: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0612 00:10:06.270] apps.sh:291: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:10:06.362] deployment.extensions/nginx skipped rollback (current template already matches revision 1)
I0612 00:10:06.451] apps.sh:294: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:10:06.624] deployment.apps/nginx configured
W0612 00:10:06.725] E0612 00:10:05.238080   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.725] E0612 00:10:05.325840   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.726] I0612 00:10:05.398374   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment", UID:"00b2d93c-ce8c-49d4-8533-3706de0e0711", APIVersion:"apps/v1", ResourceVersion:"2063", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0612 00:10:06.726] I0612 00:10:05.404698   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-995fc88bd", UID:"00620896-c689-4f23-815f-6200a6db032e", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-g26js
W0612 00:10:06.726] I0612 00:10:05.407920   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-995fc88bd", UID:"00620896-c689-4f23-815f-6200a6db032e", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-schxn
W0612 00:10:06.727] I0612 00:10:05.409956   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-deployment-995fc88bd", UID:"00620896-c689-4f23-815f-6200a6db032e", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-pjjnq
W0612 00:10:06.727] E0612 00:10:05.431282   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.727] E0612 00:10:05.527273   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.727] I0612 00:10:06.092153   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"nginx", UID:"9f044ad6-93f8-4227-9537-d039a6be062c", APIVersion:"apps/v1", ResourceVersion:"2088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-74bc9cdd45 to 3
W0612 00:10:06.728] I0612 00:10:06.096528   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-74bc9cdd45", UID:"dd027aef-ea23-4f56-9049-dd03e7dab902", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-npcdb
W0612 00:10:06.728] I0612 00:10:06.099862   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-74bc9cdd45", UID:"dd027aef-ea23-4f56-9049-dd03e7dab902", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-whff6
W0612 00:10:06.728] I0612 00:10:06.100754   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-74bc9cdd45", UID:"dd027aef-ea23-4f56-9049-dd03e7dab902", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-zxgbm
W0612 00:10:06.728] E0612 00:10:06.239437   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.728] E0612 00:10:06.327299   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.729] E0612 00:10:06.432410   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.729] E0612 00:10:06.528735   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:06.729] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0612 00:10:06.729] I0612 00:10:06.634611   51422 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560298201-28817", Name:"nginx", UID:"9f044ad6-93f8-4227-9537-d039a6be062c", APIVersion:"apps/v1", ResourceVersion:"2102", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7f4df4ff49 to 1
W0612 00:10:06.730] I0612 00:10:06.640846   51422 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560298201-28817", Name:"nginx-7f4df4ff49", UID:"5692e521-2022-4830-a968-428ab529a5dc", APIVersion:"apps/v1", ResourceVersion:"2103", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7f4df4ff49-qjbvk
I0612 00:10:06.830] apps.sh:297: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0612 00:10:06.857]     Image:	k8s.gcr.io/nginx:test-cmd
I0612 00:10:06.947] apps.sh:300: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0612 00:10:07.048] deployment.extensions/nginx rolled back
W0612 00:10:07.241] E0612 00:10:07.240888   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:07.329] E0612 00:10:07.328617   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:07.434] E0612 00:10:07.434100   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:07.531] E0612 00:10:07.530388   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:10:08.142] apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:10:08.343] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0612 00:10:08.456] deployment.extensions/nginx rolled back
W0612 00:10:08.557] E0612 00:10:08.243979   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:08.557] error: unable to find specified revision 1000000 in history
W0612 00:10:08.558] E0612 00:10:08.329907   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:08.558] E0612 00:10:08.435427   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:08.558] E0612 00:10:08.531708   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
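The 'skipped rollback', 'rolled back', and 'unable to find specified revision 1000000' lines all come from kubectl rollout undo with different --to-revision targets; a sketch:

    # No-op: the current template already matches revision 1
    kubectl rollout undo deployment nginx --to-revision=1
    # Plain undo: back to the previous revision
    kubectl rollout undo deployment nginx
    # Fails: revision 1000000 does not exist in the rollout history
    kubectl rollout undo deployment nginx --to-revision=1000000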
W0612 00:10:09.246] E0612 00:10:09.245877   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:09.332] E0612 00:10:09.331719   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:09.437] E0612 00:10:09.437049   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:09.534] E0612 00:10:09.533299   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:10:09.634] apps.sh:311: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0612 00:10:09.661] deployment.extensions/nginx paused
W0612 00:10:09.781] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0612 00:10:09.873] error: deployments.extensions "nginx" can't restart paused deployment (run rollout resume first)
I0612 00:10:09.974] deployment.extensions/nginx resumed
I0612 00:10:10.097] deployment.extensions/nginx rolled back
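The paused-deployment errors at 00:10:09 follow the expected pause/undo/restart/resume ordering; a sketch:

    kubectl rollout pause deployment nginx
    # Both fail while paused, as logged above
    kubectl rollout undo deployment nginx
    kubectl rollout restart deployment nginx
    # After resuming, the rollback goes through ('rolled back' above)
    kubectl rollout resume deployment nginx
    kubectl rollout undo deployment nginx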
W0612 00:10:10.248] E0612 00:10:10.247399   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:10.334] E0612 00:10:10.333463   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0612 00:10:10.434]     deployment.kubernetes.io/revision-history: 1,3
W0612 00:10:10.535] E0612 00:10:10.438496   51422 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0612 00:10:10.535] error: desired revision (3) is different from the running revision (5)
W0612 00:10:10.53