Result: FAILURE
Tests: 1 failed / 1692 succeeded
Started: 2019-06-11 06:40
Elapsed: 28m56s
Revision:
Builder: gke-prow-containerd-pool-bigger-170c3937-4bm7
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/491915cf-7c95-48da-ae3a-9d376c70c77e/targets/test'}}
pod: ada065ee-8c13-11e9-9cdf-225cd3053b6a
resultstore: https://source.cloud.google.com/results/invocations/491915cf-7c95-48da-ae3a-9d376c70c77e/targets/test
infra-commit: 1a71afd20
repo: k8s.io/kubernetes
repo-commit: 8de1569ddae62e8fab559fe6bd210a5d6100a277
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestUnreservePlugin 5.07s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestUnreservePlugin$
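
For context: TestUnreservePlugin exercises the scheduler framework's Unreserve extension point, which rolls back a plugin's state when a pod that passed Reserve later fails (for example at the permit or bind phase). Below is a minimal, hypothetical sketch of such a plugin written against the v1alpha1 scheduler framework API from roughly this era; the package path, type names, and method signatures are assumptions for illustration, not the actual plugin used by this test.

// Hypothetical sketch only: a counting Unreserve plugin for the
// v1alpha1 scheduler framework. Signatures are assumed from the era
// of this log and may not match the real test plugin exactly.
package schedulerplugins

import (
	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// unreservePlugin records how often Unreserve is invoked so a test can
// assert that reserved state is rolled back when scheduling fails late.
type unreservePlugin struct {
	numUnreserveCalled int
}

// Name identifies the plugin in the framework's plugin registry.
func (up *unreservePlugin) Name() string { return "test-unreserve-plugin" }

// Unreserve is called after a later phase fails for a pod that was
// already reserved on nodeName, giving the plugin a chance to clean up.
func (up *unreservePlugin) Unreserve(pc *framework.PluginContext, pod *v1.Pod, nodeName string) {
	up.numUnreserveCalled++
}

A test of this general shape registers the plugin with the scheduler, forces a later phase to reject the pod, and then asserts that numUnreserveCalled was incremented.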
I0611 07:00:31.004113  109417 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0611 07:00:31.004195  109417 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0611 07:00:31.004245  109417 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0611 07:00:31.004262  109417 master.go:233] Using reconciler: 
I0611 07:00:31.007064  109417 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.007506  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.007525  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.007629  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.007905  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.008434  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.008551  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.008682  109417 store.go:1343] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0611 07:00:31.008720  109417 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.008751  109417 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0611 07:00:31.008945  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.008957  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.008994  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.009057  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.009563  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.010479  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.010916  109417 watch_cache.go:405] Replace watchCache (rev: 22206) 
I0611 07:00:31.013058  109417 store.go:1343] Monitoring events count at <storage-prefix>//events
I0611 07:00:31.013158  109417 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.013353  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.013405  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.013511  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.013595  109417 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0611 07:00:31.013916  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.014322  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.014612  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.014676  109417 store.go:1343] Monitoring limitranges count at <storage-prefix>//limitranges
I0611 07:00:31.014763  109417 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0611 07:00:31.015432  109417 watch_cache.go:405] Replace watchCache (rev: 22207) 
I0611 07:00:31.016725  109417 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.016823  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.016838  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.016872  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.016930  109417 watch_cache.go:405] Replace watchCache (rev: 22207) 
I0611 07:00:31.017021  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.019693  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.019875  109417 store.go:1343] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0611 07:00:31.020078  109417 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.020168  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.020186  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.020257  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.020301  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.020346  109417 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0611 07:00:31.020605  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.021179  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.021403  109417 store.go:1343] Monitoring secrets count at <storage-prefix>//secrets
I0611 07:00:31.021589  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.021591  109417 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.021730  109417 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0611 07:00:31.021791  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.021850  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.021892  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.022001  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.022308  109417 watch_cache.go:405] Replace watchCache (rev: 22207) 
I0611 07:00:31.022333  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.022466  109417 store.go:1343] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0611 07:00:31.022609  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.022625  109417 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.023061  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.023074  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.023112  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.023319  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.031921  109417 watch_cache.go:405] Replace watchCache (rev: 22207) 
I0611 07:00:31.032988  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.033147  109417 store.go:1343] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0611 07:00:31.033311  109417 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.033388  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.033398  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.033426  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.033468  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.033502  109417 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0611 07:00:31.033681  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.035111  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.035234  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.035245  109417 store.go:1343] Monitoring configmaps count at <storage-prefix>//configmaps
I0611 07:00:31.035542  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.035545  109417 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.035598  109417 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0611 07:00:31.035616  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.035627  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.035662  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.035763  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.035949  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.036074  109417 store.go:1343] Monitoring namespaces count at <storage-prefix>//namespaces
I0611 07:00:31.036195  109417 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.036327  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.036337  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.036368  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.036398  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.036423  109417 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0611 07:00:31.036653  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.036867  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.037203  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.037590  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.039203  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.039877  109417 store.go:1343] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0611 07:00:31.040012  109417 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.040071  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.040080  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.040112  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.040163  109417 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0611 07:00:31.040423  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.041438  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.041649  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.042007  109417 store.go:1343] Monitoring nodes count at <storage-prefix>//minions
I0611 07:00:31.042188  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.042263  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.042276  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.042309  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.042319  109417 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0611 07:00:31.042261  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.042457  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.042817  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.042928  109417 store.go:1343] Monitoring pods count at <storage-prefix>//pods
I0611 07:00:31.043095  109417 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.043172  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.043185  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.043240  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.043314  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.043387  109417 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0611 07:00:31.043632  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.043643  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.044111  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.044260  109417 store.go:1343] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0611 07:00:31.044445  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.044471  109417 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.044597  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.022664  109417 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0611 07:00:31.044670  109417 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0611 07:00:31.045101  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.045116  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.045147  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.045197  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.045459  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.045579  109417 store.go:1343] Monitoring services count at <storage-prefix>//services/specs
I0611 07:00:31.045610  109417 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.045716  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.045734  109417 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0611 07:00:31.046678  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.046800  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.046854  109417 watch_cache.go:405] Replace watchCache (rev: 22208) 
I0611 07:00:31.047232  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.047248  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.047281  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.047325  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.047610  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.047668  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.047715  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.047727  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.047754  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.047839  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.048526  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.048709  109417 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.048785  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.048796  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.048827  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.048865  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.048901  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.049532  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.049592  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.049664  109417 store.go:1343] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0611 07:00:31.049702  109417 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0611 07:00:31.050263  109417 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.050492  109417 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.051474  109417 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.051804  109417 watch_cache.go:405] Replace watchCache (rev: 22210) 
I0611 07:00:31.052301  109417 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.053196  109417 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.054268  109417 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.054756  109417 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.054911  109417 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.055152  109417 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.055687  109417 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.056517  109417 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.056783  109417 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.057552  109417 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.057844  109417 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.058586  109417 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.058756  109417 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.059462  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.059698  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.059890  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.060024  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.060168  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.060316  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.060483  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.061483  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.061785  109417 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.062607  109417 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.063686  109417 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.063970  109417 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.064274  109417 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.065288  109417 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.065636  109417 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.066468  109417 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.067256  109417 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.068050  109417 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.068871  109417 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.069180  109417 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.069375  109417 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0611 07:00:31.069404  109417 master.go:425] Enabling API group "authentication.k8s.io".
I0611 07:00:31.069421  109417 master.go:425] Enabling API group "authorization.k8s.io".
I0611 07:00:31.071681  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.071800  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.071816  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.071851  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.071979  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.072414  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.072558  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.072773  109417 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0611 07:00:31.072865  109417 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0611 07:00:31.073195  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.073584  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.073659  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.073779  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.073863  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.074577  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.074626  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.074855  109417 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0611 07:00:31.074902  109417 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0611 07:00:31.075312  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.075401  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.075418  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.075474  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.075884  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.076882  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.077292  109417 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0611 07:00:31.077327  109417 master.go:425] Enabling API group "autoscaling".
I0611 07:00:31.077668  109417 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.077747  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.077792  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.077827  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.077891  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.078007  109417 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0611 07:00:31.078056  109417 watch_cache.go:405] Replace watchCache (rev: 22213) 
I0611 07:00:31.078299  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.078746  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.078790  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.078906  109417 store.go:1343] Monitoring jobs.batch count at <storage-prefix>//jobs
I0611 07:00:31.078994  109417 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0611 07:00:31.079122  109417 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.079185  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.079196  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.079246  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.079324  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.080056  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.080186  109417 store.go:1343] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0611 07:00:31.080220  109417 master.go:425] Enabling API group "batch".
I0611 07:00:31.080379  109417 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.080456  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.080468  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.080498  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.080556  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.080635  109417 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0611 07:00:31.080837  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.081273  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.081383  109417 store.go:1343] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0611 07:00:31.081406  109417 master.go:425] Enabling API group "certificates.k8s.io".
I0611 07:00:31.081577  109417 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.081677  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.081695  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.081726  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.081796  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.081827  109417 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0611 07:00:31.082008  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.082038  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.082047  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.082072  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.082299  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.082343  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.082383  109417 store.go:1343] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0611 07:00:31.082418  109417 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0611 07:00:31.082535  109417 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.082600  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.082609  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.082641  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.082694  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.082891  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.082952  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.082999  109417 store.go:1343] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0611 07:00:31.083017  109417 master.go:425] Enabling API group "coordination.k8s.io".
I0611 07:00:31.083096  109417 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0611 07:00:31.083180  109417 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.083268  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.083283  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.083312  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.083362  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.083656  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.083764  109417 store.go:1343] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0611 07:00:31.083975  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.084046  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.084064  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.084091  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.084149  109417 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0611 07:00:31.084414  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.084434  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.084528  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.084679  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.084733  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.084777  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.085168  109417 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0611 07:00:31.085304  109417 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0611 07:00:31.085344  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.085359  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.085414  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.085424  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.085455  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.085595  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.085929  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.085964  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.086224  109417 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 07:00:31.086262  109417 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 07:00:31.086400  109417 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.086483  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.086496  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.086531  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.086575  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.087668  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.088152  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.088333  109417 store.go:1343] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0611 07:00:31.088441  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.088544  109417 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0611 07:00:31.088568  109417 watch_cache.go:405] Replace watchCache (rev: 22214) 
I0611 07:00:31.088602  109417 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.088703  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.088720  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.088750  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.088864  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.089097  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.089181  109417 watch_cache.go:405] Replace watchCache (rev: 22215) 
I0611 07:00:31.089239  109417 store.go:1343] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0611 07:00:31.089262  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.089282  109417 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0611 07:00:31.089408  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.089472  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.089488  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.089531  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.089572  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.090009  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.090083  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.090111  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.090134  109417 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0611 07:00:31.090160  109417 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0611 07:00:31.090619  109417 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.090715  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.090733  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.090764  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.090827  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.091428  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.091468  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.091504  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.091612  109417 store.go:1343] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0611 07:00:31.091634  109417 master.go:425] Enabling API group "extensions".
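The parsed scheme / scheme "" not registered / ccResolverWrapper / balancerWrapper / pin sequence repeats once per store because each registration dials etcd through a fresh clientv3 connection, and those lines are the gRPC resolver and balancer negotiating the single endpoint. A sketch of the dial that produces them, assuming the go.etcd.io/etcd/clientv3 package of this era:

    package main

    import (
        "log"
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    func main() {
        // One dial like this sits behind every parsed-scheme/pin block above.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"}, // matches ServerList in the Config lines
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close() // the test apiserver keeps its connections open instead
    }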
I0611 07:00:31.091692  109417 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0611 07:00:31.091780  109417 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.092092  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.092145  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.092177  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.092432  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.092718  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.092811  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.092845  109417 store.go:1343] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0611 07:00:31.092933  109417 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0611 07:00:31.093028  109417 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.093104  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.093126  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.093319  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.093369  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.093392  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.093598  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.093735  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.093782  109417 store.go:1343] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0611 07:00:31.093798  109417 master.go:425] Enabling API group "networking.k8s.io".
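Each reflector.go:160 "Listing and watching" line is a cacher-internal reflector starting its LIST-then-WATCH loop for one resource. The same primitive is exposed in client-go; a sketch against typed v1 Pods rather than the server's internal *core / *networking types (the newPodReflector helper name and the resync choice are illustrative):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    // newPodReflector builds the LIST+WATCH loop the log calls "Listing and
    // watching": list once, then watch from the returned resourceVersion,
    // mirroring events into a local store.
    func newPodReflector(cfg *rest.Config) (*cache.Reflector, cache.Store) {
        cs := kubernetes.NewForConfigOrDie(cfg)
        lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
        store := cache.NewStore(cache.MetaNamespaceKeyFunc)
        return cache.NewReflector(lw, &v1.Pod{}, store, 0), store // resync 0: trust the watch
    }

    func main() {} // a real process would call Reflector.Run(stopCh) in a goroutine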
I0611 07:00:31.093831  109417 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0611 07:00:31.093839  109417 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.093929  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.093940  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.093966  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.094125  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.094434  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.094551  109417 store.go:1343] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0611 07:00:31.094567  109417 master.go:425] Enabling API group "node.k8s.io".
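The store.go:1343 "Monitoring <resource> count" lines start a poller that periodically counts the resource's keys in etcd (every CountMetricPollPeriod, per the Config above). Such a count boils down to a ranged read with CountOnly; a sketch, assuming clientv3 and a hypothetical countKeys helper:

    package countsketch

    import (
        "context"

        "go.etcd.io/etcd/clientv3"
    )

    // countKeys issues the CountOnly ranged Get that a count poll reduces
    // to: no values come back, only resp.Count for the key prefix.
    func countKeys(ctx context.Context, cli *clientv3.Client, prefix string) (int64, error) {
        resp, err := cli.Get(ctx, prefix, clientv3.WithPrefix(), clientv3.WithCountOnly())
        if err != nil {
            return 0, err
        }
        return resp.Count, nil
    }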
I0611 07:00:31.094721  109417 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.094826  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.094839  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.094868  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.094908  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.094959  109417 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0611 07:00:31.095175  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.095536  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.095645  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.095652  109417 store.go:1343] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0611 07:00:31.095693  109417 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0611 07:00:31.095792  109417 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.095863  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.095872  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.095899  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.095982  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.096261  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.096380  109417 store.go:1343] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0611 07:00:31.096408  109417 master.go:425] Enabling API group "policy".
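watch_cache.go:405 "Replace watchCache (rev: N)" records the moment a reflector's initial LIST finishes and the watch cache is swapped wholesale at that etcd revision; watches then resume from rev N. client-go's cache.Store exposes the same Replace primitive, sketched here with a toy object:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        store := cache.NewStore(cache.MetaNamespaceKeyFunc)
        pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"}}
        // Atomically install the listed state at resourceVersion "22216",
        // like the Replace watchCache lines above.
        _ = store.Replace([]interface{}{pod}, "22216")
        fmt.Println(store.ListKeys()) // [default/p]
    }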
I0611 07:00:31.096438  109417 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.096674  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.096689  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.096748  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.096806  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.096853  109417 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0611 07:00:31.097028  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.097403  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.097889  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.098024  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.098064  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.098190  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.098526  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.099024  109417 watch_cache.go:405] Replace watchCache (rev: 22216) 
I0611 07:00:31.105744  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.105924  109417 store.go:1343] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0611 07:00:31.106137  109417 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.106262  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.106276  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.106316  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.106378  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.106424  109417 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0611 07:00:31.106611  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.107032  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.107288  109417 store.go:1343] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0611 07:00:31.107329  109417 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.107413  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.107430  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.107471  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.107561  109417 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0611 07:00:31.107615  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.107775  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.108020  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.108094  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.108143  109417 store.go:1343] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0611 07:00:31.108277  109417 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0611 07:00:31.108375  109417 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.108438  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.108454  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.108489  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.108748  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.109040  109417 watch_cache.go:405] Replace watchCache (rev: 22217) 
I0611 07:00:31.109188  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.109280  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.109390  109417 store.go:1343] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0611 07:00:31.109486  109417 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0611 07:00:31.109520  109417 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.109619  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.109633  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.109667  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.109712  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.109966  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.110102  109417 store.go:1343] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0611 07:00:31.110332  109417 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.110467  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.110497  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.110563  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.110633  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.110698  109417 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0611 07:00:31.110865  109417 watch_cache.go:405] Replace watchCache (rev: 22217) 
I0611 07:00:31.111110  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.111148  109417 watch_cache.go:405] Replace watchCache (rev: 22217) 
I0611 07:00:31.111431  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.111500  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.111567  109417 store.go:1343] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0611 07:00:31.111587  109417 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0611 07:00:31.111593  109417 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.111670  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.111686  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.111737  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.111852  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.112563  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.112677  109417 store.go:1343] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0611 07:00:31.112856  109417 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.112879  109417 watch_cache.go:405] Replace watchCache (rev: 22218) 
I0611 07:00:31.112921  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.112931  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.112958  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.113077  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.113140  109417 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0611 07:00:31.113167  109417 watch_cache.go:405] Replace watchCache (rev: 22218) 
I0611 07:00:31.113619  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.113998  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.114090  109417 store.go:1343] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0611 07:00:31.114121  109417 master.go:425] Enabling API group "rbac.authorization.k8s.io".
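The "<storage-prefix>//roles" style paths in the Monitoring lines are the etcd key prefixes for each store (the doubled slash is just the prefix and the per-resource path being concatenated in the log message). Assuming the usual <prefix>/<resource>/<namespace>/<name> key layout, the RBAC objects written by this test run could be listed straight out of etcd; a sketch:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()
        // List every Role key under this run's prefix (UUID taken from the log).
        resp, err := cli.Get(context.Background(),
            "/87f3ec20-0a61-4aa8-9363-f53a61c4c11a/roles/", clientv3.WithPrefix())
        if err != nil {
            log.Fatal(err)
        }
        for _, kv := range resp.Kvs {
            fmt.Println(string(kv.Key)) // e.g. /<prefix>/roles/<namespace>/<name>
        }
    }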
I0611 07:00:31.114303  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.114377  109417 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0611 07:00:31.115104  109417 watch_cache.go:405] Replace watchCache (rev: 22218) 
I0611 07:00:31.116003  109417 watch_cache.go:405] Replace watchCache (rev: 22218) 
I0611 07:00:31.117782  109417 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.117994  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.118073  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.118154  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.118258  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.118606  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.118832  109417 store.go:1343] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0611 07:00:31.119135  109417 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.119279  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.119321  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.119667  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.119846  109417 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0611 07:00:31.120647  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.120721  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.120926  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.121039  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.121055  109417 store.go:1343] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0611 07:00:31.121067  109417 master.go:425] Enabling API group "scheduling.k8s.io".
I0611 07:00:31.121086  109417 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0611 07:00:31.121168  109417 master.go:417] Skipping disabled API group "settings.k8s.io".
I0611 07:00:31.121381  109417 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.121459  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.121467  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.121491  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.121757  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.122021  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.122100  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.122245  109417 store.go:1343] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0611 07:00:31.122377  109417 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0611 07:00:31.123267  109417 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.123496  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.123543  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.123613  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.123691  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.124018  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.124194  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.124390  109417 store.go:1343] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0611 07:00:31.124456  109417 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.124538  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.124576  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.124678  109417 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0611 07:00:31.125012  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.125164  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.125712  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.125793  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.125964  109417 store.go:1343] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0611 07:00:31.126022  109417 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.126297  109417 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0611 07:00:31.128356  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.128407  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.128484  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.128674  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.129001  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.129058  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.129097  109417 store.go:1343] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0611 07:00:31.129119  109417 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0611 07:00:31.129397  109417 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.129486  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.129507  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.129563  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.129849  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.130127  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.130275  109417 store.go:1343] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0611 07:00:31.130445  109417 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.130668  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.130701  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.130737  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.130827  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.130867  109417 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0611 07:00:31.131129  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.131384  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.131456  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.131566  109417 store.go:1343] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0611 07:00:31.131611  109417 master.go:425] Enabling API group "storage.k8s.io".
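Every entry here shares the klog header shape: severity plus date, time with microsecond precision, the pid, the emitting file:line, then the message. That fixed shape makes the log machine-parseable; a small Go parser, fed one line from this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klog header: <IWEF><mmdd> <hh:mm:ss.micros>  <pid> <file:line>] <msg>
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

    func main() {
        line := `I0611 07:00:31.131611  109417 master.go:425] Enabling API group "storage.k8s.io".`
        m := klogRe.FindStringSubmatch(line)
        fmt.Printf("sev=%s date=%s time=%s pid=%s at=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
    }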
I0611 07:00:31.131740  109417 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0611 07:00:31.131875  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.131991  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.132049  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.132102  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.132185  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.132480  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.132557  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.132625  109417 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 07:00:31.132687  109417 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 07:00:31.132804  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.132890  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.132911  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.132947  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.133062  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.133588  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.133676  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.133726  109417 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0611 07:00:31.133781  109417 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0611 07:00:31.133914  109417 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.133998  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.134016  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.134051  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.134133  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.134460  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.134517  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.134600  109417 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0611 07:00:31.134700  109417 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0611 07:00:31.134802  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.134888  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.134908  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.134981  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.135038  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.135263  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.135478  109417 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 07:00:31.135507  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.135682  109417 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 07:00:31.135704  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.136042  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.136052  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.136074  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.136145  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.137079  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.137321  109417 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0611 07:00:31.137506  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.137577  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.137596  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.137640  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.137693  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.137784  109417 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0611 07:00:31.138062  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.138290  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.138538  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.138784  109417 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0611 07:00:31.138907  109417 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0611 07:00:31.139110  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.139255  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.139296  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.139330  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.139399  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.139687  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.139852  109417 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0611 07:00:31.140130  109417 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.140222  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.140261  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.140295  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.140416  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.140470  109417 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0611 07:00:31.140702  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.141244  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.141739  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.141858  109417 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0611 07:00:31.142028  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.142084  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.142096  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.142122  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.142154  109417 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0611 07:00:31.142358  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.142549  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.142675  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.142707  109417 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 07:00:31.142874  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.142937  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.143007  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.143057  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.143126  109417 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 07:00:31.144280  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.144531  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.144616  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.144708  109417 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0611 07:00:31.144816  109417 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0611 07:00:31.144868  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.144915  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.144921  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.144955  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.145020  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.145359  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.145415  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.145508  109417 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0611 07:00:31.145640  109417 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0611 07:00:31.145680  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.145731  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.145738  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.145759  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.145869  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.146138  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.146249  109417 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0611 07:00:31.146286  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.146328  109417 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0611 07:00:31.146464  109417 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.146535  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.146550  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.146571  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.146632  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.147024  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.147151  109417 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0611 07:00:31.147179  109417 master.go:425] Enabling API group "apps".
I0611 07:00:31.147238  109417 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.147320  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.147365  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.147405  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.147473  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.147520  109417 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0611 07:00:31.148405  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.148727  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.148828  109417 store.go:1343] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0611 07:00:31.148856  109417 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.148894  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.148983  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.148996  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.149023  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.149122  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.149144  109417 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0611 07:00:31.149395  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.149502  109417 store.go:1343] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0611 07:00:31.149520  109417 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0611 07:00:31.149547  109417 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.149661  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.149714  109417 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0611 07:00:31.149751  109417 client.go:354] parsed scheme: ""
I0611 07:00:31.149772  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:31.149808  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:31.150019  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.150391  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:31.150520  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:31.150644  109417 store.go:1343] Monitoring events count at <storage-prefix>//events
I0611 07:00:31.150686  109417 master.go:425] Enabling API group "events.k8s.io".
I0611 07:00:31.150697  109417 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0611 07:00:31.151093  109417 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.151824  109417 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.153277  109417 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.153719  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.153876  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.153950  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.154006  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.154238  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.154344  109417 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.154578  109417 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.154708  109417 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.154779  109417 watch_cache.go:405] Replace watchCache (rev: 22220) 
I0611 07:00:31.155191  109417 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.155248  109417 watch_cache.go:405] Replace watchCache (rev: 22220) 
I0611 07:00:31.155408  109417 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.155536  109417 watch_cache.go:405] Replace watchCache (rev: 22220) 
I0611 07:00:31.155541  109417 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.155677  109417 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.155690  109417 watch_cache.go:405] Replace watchCache (rev: 22220) 
I0611 07:00:31.156723  109417 watch_cache.go:405] Replace watchCache (rev: 22220) 
I0611 07:00:31.169727  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170096  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170100  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170109  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170181  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170401  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170445  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170690  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170710  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170810  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.170915  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.171705  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.172001  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.172352  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.173581  109417 watch_cache.go:405] Replace watchCache (rev: 22221) 
I0611 07:00:31.183696  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.184388  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.186496  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.187138  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.189370  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.190097  109417 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.196329  109417 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.198681  109417 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.199618  109417 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.199974  109417 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 07:00:31.200247  109417 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
I0611 07:00:31.202231  109417 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.203138  109417 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.205025  109417 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.205895  109417 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.207096  109417 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.210665  109417 storage_factory.go:285] storing daemonsets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.210984  109417 storage_factory.go:285] storing daemonsets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.211636  109417 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.211866  109417 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.212136  109417 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.212408  109417 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.213549  109417 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.214797  109417 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.215419  109417 storage_factory.go:285] storing networkpolicies.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.216031  109417 storage_factory.go:285] storing podsecuritypolicies.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.216657  109417 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.216884  109417 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.217131  109417 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.217199  109417 storage_factory.go:285] storing replicationcontrollers.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.217399  109417 storage_factory.go:285] storing replicationcontrollers.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.218304  109417 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.219166  109417 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.219384  109417 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.220029  109417 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 07:00:31.220096  109417 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0611 07:00:31.220820  109417 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.221075  109417 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.221508  109417 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.222148  109417 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.222653  109417 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.223365  109417 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.224033  109417 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.224790  109417 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.225390  109417 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.226136  109417 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.226881  109417 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 07:00:31.226968  109417 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0611 07:00:31.227649  109417 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.228144  109417 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 07:00:31.228228  109417 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0611 07:00:31.228789  109417 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.229287  109417 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.229574  109417 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.230076  109417 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.230447  109417 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.230873  109417 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.231417  109417 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 07:00:31.231471  109417 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0611 07:00:31.232052  109417 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.232736  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.232930  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.233459  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.233670  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.233830  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.234397  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.234646  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.234822  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.235330  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.235621  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.235864  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.236513  109417 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.236993  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.237181  109417 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.237658  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.237873  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.238048  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.238547  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.238717  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.238946  109417 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.239693  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.239971  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.240226  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.240885  109417 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.241596  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.241760  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.241941  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.242128  109417 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.242693  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.242865  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.244882  109417 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.245425  109417 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.246016  109417 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 07:00:31.246846  109417 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87f3ec20-0a61-4aa8-9363-f53a61c4c11a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
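The storage_factory.go:285 lines above show the test apiserver wiring every REST resource (replicasets.apps, statefulsets.apps, deployments.apps, the webhook configurations, events) to the same etcd backend: only the group/version and codec vary, while the transport, prefix, and poll intervals are shared. Below is a minimal sketch of building that shared config, assuming the k8s.io/apiserver package layout vendored here; the field values are taken straight from the log (the 300000000000ns compaction interval is 5m, the 60000000000ns count-metric period is 1m).

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// One shared backend config, reused for every resource registered above.
	cfg := storagebackend.Config{
		Prefix: "87f3ec20-0a61-4aa8-9363-f53a61c4c11a", // per-test etcd key prefix
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"}, // the local test etcd
		},
		CompactionInterval:    5 * time.Minute, // logged as 300000000000 (ns)
		CountMetricPollPeriod: time.Minute,     // logged as 60000000000 (ns)
	}
	fmt.Printf("%+v\n", cfg)
}

Each "storing X in Y, reading as Z/__internal" line then pairs this one config with a resource-specific codec.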
I0611 07:00:31.248799  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.248840  109417 healthz.go:161] healthz check poststarthook/bootstrap-controller failed: not finished
I0611 07:00:31.248849  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.248859  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.248867  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.248879  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.249041  109417 wrap.go:47] GET /healthz: (331.619µs) 500
goroutine 10463 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0064ea000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0064ea000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004988060, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc000be20c8, 0xc00005e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998100)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc000be20c8, 0xc003998100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007dba300, 0xc007edc780, 0x74261e0, 0xc000be20c8, 0xc003998100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
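Every GET /healthz above returns 500 until the etcd client connection is established and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) finish; the body lists each named check as [+] or [-], with failure reasons withheld from the client and logged server-side instead. The test harness simply keeps polling, which is why the same block recurs roughly every 10-100ms below. A minimal sketch of that readiness loop, with a hypothetical base URL and timeout:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every named check ([+]ping, [+]log, etcd, hooks) passed
			}
		}
		time.Sleep(100 * time.Millisecond) // the log shows ~10-100ms retry spacing
	}
	return fmt.Errorf("healthz never became ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}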
I0611 07:00:31.250362  109417 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.775731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:31.252991  109417 wrap.go:47] GET /api/v1/services: (981.696µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:31.257018  109417 wrap.go:47] GET /api/v1/services: (1.089987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:31.259294  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.259366  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.259406  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.259423  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.259436  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.259590  109417 wrap.go:47] GET /healthz: (427.971µs) 500
goroutine 10869 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0064ea930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0064ea930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004988f00, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc000be23c0, 0xc00295aa80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998e00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc000be23c0, 0xc003998e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007dba7e0, 0xc007edc780, 0x74261e0, 0xc000be23c0, 0xc003998e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
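The goroutine dumps accompanying each 500 all walk the same filter chain: the request enters at the timeout handler and descends through WithAuthentication, WithImpersonation, WithMaxInFlightLimit, and WithAuthorization before reaching the PathRecorderMux and the healthz handler that writes the status. Each filter is just an http.Handler wrapping the next one. A minimal sketch of the pattern, using a hypothetical logging filter rather than one of the apiserver's:

package main

import (
	"log"
	"net/http"
)

// withRequestLog wraps next so the closure runs before the inner handler,
// the same composition the stack traces above descend through.
func withRequestLog(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// Outermost wrapper sees the request first, exactly as timeoutHandler does.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", withRequestLog(mux)))
}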
I0611 07:00:31.260420  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.242493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.260948  109417 wrap.go:47] GET /api/v1/services: (936.624µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:31.262290  109417 wrap.go:47] GET /api/v1/services: (1.64043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.262586  109417 wrap.go:47] POST /api/v1/namespaces: (1.786768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38238]
I0611 07:00:31.264133  109417 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.138801ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.266019  109417 wrap.go:47] POST /api/v1/namespaces: (1.473581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.267529  109417 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.092359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.269400  109417 wrap.go:47] POST /api/v1/namespaces: (1.336648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
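The 404/201 pairs just above are the bootstrap sequence ensuring the system namespaces exist: each GET of kube-system, kube-public, and kube-node-lease comes back 404 on this fresh etcd prefix, so a POST /api/v1/namespaces follows and returns 201. A minimal sketch of that get-or-create idiom against an unsecured local apiserver (the base URL is an assumption):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func ensureNamespace(base, name string) error {
	resp, err := http.Get(base + "/api/v1/namespaces/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already exists (or a non-404 worth inspecting separately)
	}
	body := []byte(fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name))
	resp, err = http.Post(base+"/api/v1/namespaces", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("expected 201 creating %s, got %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace("http://127.0.0.1:8080", ns); err != nil {
			fmt.Println(err)
		}
	}
}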
I0611 07:00:31.349861  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.349973  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.350012  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.350036  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.350056  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.350314  109417 wrap.go:47] GET /healthz: (571.951µs) 500
goroutine 10886 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f8620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f8620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e6e20, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdc090, 0xc003798480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52a00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdc090, 0xc002d52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b72ea0, 0xc007edc780, 0x74261e0, 0xc002fdc090, 0xc002d52a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.360428  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.360466  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.360477  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.360507  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.360559  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.360699  109417 wrap.go:47] GET /healthz: (387.672µs) 500
goroutine 10857 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bdf730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bdf730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e0620, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a6f8, 0xc0037ba300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd200)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a6f8, 0xc002edd200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e576e0, 0xc007edc780, 0x74261e0, 0xc00282a6f8, 0xc002edd200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.449908  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.450008  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.450020  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.450026  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.450032  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.450195  109417 wrap.go:47] GET /healthz: (422.858µs) 500
goroutine 10888 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e6ec0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdc098, 0xc003798a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d52f00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdc098, 0xc002d52f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b72f60, 0xc007edc780, 0x74261e0, 0xc002fdc098, 0xc002d52f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.460395  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.460438  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.460452  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.460462  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.460468  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.460607  109417 wrap.go:47] GET /healthz: (328.634µs) 500
goroutine 10890 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f8930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f8930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e6f60, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc003799080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdc0a0, 0xc002d53300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b73320, 0xc007edc780, 0x74261e0, 0xc002fdc0a0, 0xc002d53300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.551254  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.551309  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.551340  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.551387  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.551407  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.551702  109417 wrap.go:47] GET /healthz: (1.073718ms) 500
goroutine 10906 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00606d490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00606d490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048d3ce0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0012fa278, 0xc004442c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52200)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0012fa278, 0xc002b52200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007aa3e00, 0xc007edc780, 0x74261e0, 0xc0012fa278, 0xc002b52200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.560429  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.560467  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.560479  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.560487  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.560495  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.560648  109417 wrap.go:47] GET /healthz: (336.976µs) 500
goroutine 10908 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00606d5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00606d5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048d3f60, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0012fa288, 0xc004443680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52800)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0012fa288, 0xc002b52800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007aa3f20, 0xc007edc780, 0x74261e0, 0xc0012fa288, 0xc002b52800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.649805  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.649836  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.649845  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.649852  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.649857  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.650023  109417 wrap.go:47] GET /healthz: (333.019µs) 500
goroutine 10859 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bdf880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bdf880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e0ac0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a740, 0xc0037bac00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002aa2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a740, 0xc002eddf00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a740, 0xc002eddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e579e0, 0xc007edc780, 0x74261e0, 0xc00282a740, 0xc002eddf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.660655  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.660736  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.660754  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.660769  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.660782  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.660975  109417 wrap.go:47] GET /healthz: (425.534µs) 500
goroutine 10861 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bdfa40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bdfa40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e0c60, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a768, 0xc0037bb200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3200)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a768, 0xc002aa3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e57b60, 0xc007edc780, 0x74261e0, 0xc00282a768, 0xc002aa3200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.749907  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.749947  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.749961  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.749970  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.749978  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.750101  109417 wrap.go:47] GET /healthz: (321.549µs) 500
goroutine 10874 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0064eafc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0064eafc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048ba3e0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc000be2518, 0xc00295b500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc000be2518, 0xc002c12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007dbb020, 0xc007edc780, 0x74261e0, 0xc000be2518, 0xc002c12300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.761124  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.761157  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.761166  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.761173  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.761179  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.761321  109417 wrap.go:47] GET /healthz: (305.234µs) 500
goroutine 10863 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bdfb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bdfb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e0d60, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a790, 0xc0037bb800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a790, 0xc0029a0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e57ce0, 0xc007edc780, 0x74261e0, 0xc00282a790, 0xc0029a0300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:31.849916  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.849986  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.849997  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.850004  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.850010  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.850141  109417 wrap.go:47] GET /healthz: (372.469µs) 500
goroutine 10865 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bdfce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bdfce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e0e00, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a798, 0xc0037bbe00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0b00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a798, 0xc0029a0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e57da0, 0xc007edc780, 0x74261e0, 0xc00282a798, 0xc0029a0b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.861795  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.861831  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.861841  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.861848  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.861855  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.861977  109417 wrap.go:47] GET /healthz: (333.829µs) 500
goroutine 10915 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bdfea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bdfea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e0ea0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a7a0, 0xc003828480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a0f00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a7a0, 0xc0029a0f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e57e60, 0xc007edc780, 0x74261e0, 0xc00282a7a0, 0xc0029a0f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
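Every stack trace in this section walks the same filter chain: the timeout handler passes the request to WithAuthentication, then WithImpersonation, WithMaxInFlightLimit, and WithAuthorization, before the director routes it through the PathRecorderMux. A sketch of that handler-wrapping composition, with simplified filters standing in for the real ones:

package main

import (
	"log"
	"net/http"
)

// filter wraps a handler with one concern, mirroring the apiserver's
// WithAuthentication / WithAuthorization style of composition.
type filter func(http.Handler) http.Handler

func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func withMaxInFlight(limit int) filter {
	sem := make(chan struct{}, limit)
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			select {
			case sem <- struct{}{}:
				defer func() { <-sem }()
				next.ServeHTTP(w, r)
			default:
				http.Error(w, "too many requests", http.StatusTooManyRequests)
			}
		})
	}
}

// chain applies filters so the first one listed is the first a request
// traverses, i.e. it ends up outermost in the resulting handler.
func chain(h http.Handler, filters ...filter) http.Handler {
	for i := len(filters) - 1; i >= 0; i-- {
		h = filters[i](h)
	}
	return h
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })
	http.ListenAndServe("127.0.0.1:8080", chain(mux, withLogging, withMaxInFlight(400)))
}

Because the innermost wrapper sits closest to the mux, the traces always show WithAuthorization nearest the PathRecorderMux frame and WithAuthentication nearest the timeout handler.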
I0611 07:00:31.949860  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.949894  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.949903  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.949910  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.949916  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.950052  109417 wrap.go:47] GET /healthz: (306.848µs) 500
goroutine 10892 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f8f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f8f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e78c0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdc180, 0xc003799c80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0012a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdc180, 0xc0013dbb00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdc180, 0xc0013dbb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b73bc0, 0xc007edc780, 0x74261e0, 0xc002fdc180, 0xc0013dbb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:31.960388  109417 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 07:00:31.960425  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:31.960437  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:31.960448  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:31.960460  109417 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:31.960628  109417 wrap.go:47] GET /healthz: (375.831µs) 500
goroutine 10894 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f90a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f90a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e7c00, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdc188, 0xc003832480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5600)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdc188, 0xc0012a5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007b73ce0, 0xc007edc780, 0x74261e0, 0xc002fdc188, 0xc0012a5600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.005732  109417 client.go:354] parsed scheme: ""
I0611 07:00:32.005765  109417 client.go:354] scheme "" not registered, fallback to default scheme
I0611 07:00:32.005836  109417 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 07:00:32.005920  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 07:00:32.006385  109417 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 07:00:32.006473  109417 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
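The parsed scheme, fallback-to-default-scheme, and balancer pin lines come from the etcd clientv3 gRPC balancer dialing the single 127.0.0.1:2379 endpoint configured for the test storage backend. Roughly what opening that connection looks like; the import path and timeout values are assumptions for this vintage of the vendored client:

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3" // vendored as github.com/coreos/etcd/clientv3 in this era
)

func main() {
	// A single-endpoint client; the "balancer pin" logs above come from this dial.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Simple round trip to confirm the connection is established.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "/registry/health"); err != nil {
		log.Fatal(err)
	}
}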
I0611 07:00:32.050907  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.050937  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:32.050948  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:32.050956  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:32.051096  109417 wrap.go:47] GET /healthz: (1.399416ms) 500
goroutine 10876 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0064eb420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0064eb420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048baaa0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc000be2540, 0xc0001011e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12c00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc000be2540, 0xc002c12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007dbb2c0, 0xc007edc780, 0x74261e0, 0xc000be2540, 0xc002c12c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:32.061500  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.061539  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:32.061550  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:32.061558  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:32.061719  109417 wrap.go:47] GET /healthz: (1.484664ms) 500
goroutine 10917 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00830a2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00830a2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e12a0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0049a89a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1700)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a7d8, 0xc0029a1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007ce0360, 0xc007edc780, 0x74261e0, 0xc00282a7d8, 0xc0029a1700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.150859  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.150897  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:32.150908  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:32.150916  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:32.151086  109417 wrap.go:47] GET /healthz: (1.325561ms) 500
goroutine 10939 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082f9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082f9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0047b42a0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdc220, 0xc000101600, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193b700)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdc220, 0xc00193b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007de1e00, 0xc007edc780, 0x74261e0, 0xc002fdc220, 0xc00193b700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:32.161539  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.161596  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:32.161611  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:32.161626  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:32.161844  109417 wrap.go:47] GET /healthz: (1.589536ms) 500
goroutine 10919 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00830a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00830a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048e1600, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282a818, 0xc00330a6e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282a818, 0xc00022c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282a818, 0xc0029a1f00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282a818, 0xc0029a1f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007ce12c0, 0xc007edc780, 0x74261e0, 0xc00282a818, 0xc0029a1f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
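The alternating Go-http-client and scheduler.test user agents hitting /healthz above are pollers waiting for readiness; the timestamps show retries roughly every 100ms until the post-start hooks complete. A plain net/http sketch of such a readiness poll (the integration framework uses its own wait helpers, and the base URL here is a placeholder):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls GET /healthz until it returns 200 or the deadline
// passes, mirroring what the repeated requests in this log are doing.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // matches the ~100ms retry cadence above
	}
	return fmt.Errorf("healthz not ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}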
I0611 07:00:32.250956  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.827095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:32.251052  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.933256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.251084  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.251109  109417 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 07:00:32.251119  109417 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 07:00:32.251134  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 07:00:32.251314  109417 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.758698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.251316  109417 wrap.go:47] GET /healthz: (1.567643ms) 500
goroutine 10607 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007f52690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007f52690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00498ed80, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0019c81a8, 0xc000101a20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42c00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0019c81a8, 0xc003f42c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e60a20, 0xc007edc780, 0x74261e0, 0xc0019c81a8, 0xc003f42c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38254]
I0611 07:00:32.253109  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (941.358µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:32.253798  109417 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.1387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.254172  109417 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0611 07:00:32.254669  109417 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.884535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.255848  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.534302ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:32.256907  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (718.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:32.257235  109417 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.220425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.258738  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.553395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:32.259578  109417 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.016006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.260047  109417 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (4.776223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.260244  109417 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0611 07:00:32.260268  109417 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
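The hook logged above is idempotent: it creates system-node-critical (value 2000001000) and system-cluster-critical (value 2000000000) and treats an already-exists error as success, which is why it can log "created successfully or already exist" on every start. A sketch of that ensure step against the v1beta1 scheduling API; the context-free Create signature matches this vintage of client-go, but the snippet is illustrative rather than the actual storage_scheduling code:

package bootstrap

import (
	"fmt"

	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureSystemPriorityClasses creates each built-in class and treats
// "already exists" as success, so the hook is safe across restarts.
func ensureSystemPriorityClasses(cs kubernetes.Interface) error {
	classes := []schedulingv1beta1.PriorityClass{
		{ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
		{ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
	}
	for i := range classes {
		_, err := cs.SchedulingV1beta1().PriorityClasses().Create(&classes[i])
		if err != nil && !apierrors.IsAlreadyExists(err) {
			return fmt.Errorf("creating %s: %v", classes[i].Name, err)
		}
	}
	return nil
}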
I0611 07:00:32.261180  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.136111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38236]
I0611 07:00:32.261341  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.261368  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.261529  109417 wrap.go:47] GET /healthz: (1.458738ms) 500
goroutine 10964 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005856c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005856c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0049269c0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f145e0, 0xc003e383c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8600)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f145e0, 0xc000be8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007ddf3e0, 0xc007edc780, 0x74261e0, 0xc001f145e0, 0xc000be8600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38254]
I0611 07:00:32.263366  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.247083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.264550  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (845.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.265546  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (734.489µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.266859  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (992.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.268749  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.268967  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
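The GET 404 / POST 201 pairs in this stretch are the RBAC bootstrap reconciler probing for each default cluster role and creating it only on NotFound. The generic shape of one such step, with hypothetical get/create callbacks standing in for the storage_rbac internals:

package bootstrap

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// ensureClusterRole does a lookup and, on NotFound, falls through to a
// create -- the 404-then-201 pattern each clusterrole above goes through.
func ensureClusterRole(name string, get, create func(string) error) error {
	err := get(name)
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return fmt.Errorf("looking up clusterrole %s: %v", name, err)
	}
	if err := create(name); err != nil {
		return err
	}
	fmt.Printf("created clusterrole %s\n", name)
	return nil
}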
I0611 07:00:32.269976  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (870.128µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.271851  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.470414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.272030  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0611 07:00:32.273164  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (894.778µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.274657  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.137502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.275173  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0611 07:00:32.276148  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (711.475µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.278039  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.278299  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0611 07:00:32.279819  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.279705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.281831  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.624926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.282188  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0611 07:00:32.283947  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.215392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.285798  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.496021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.287383  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0611 07:00:32.288673  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.087331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.290714  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.631748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.290916  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0611 07:00:32.292573  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.362551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.294480  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.295073  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0611 07:00:32.296030  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (734.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.297998  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.298256  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0611 07:00:32.299643  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (854.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.302021  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.93092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.302436  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0611 07:00:32.303373  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (748.053µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.306522  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.799201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.306712  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0611 07:00:32.308123  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.262195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.310127  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.608785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.310433  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0611 07:00:32.311463  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (873.109µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.313248  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.313418  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0611 07:00:32.314350  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (740.54µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.316060  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.31751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.316258  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0611 07:00:32.317199  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (767.138µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.319110  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.319370  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0611 07:00:32.320434  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (907.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.322254  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.441608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.322445  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0611 07:00:32.323606  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.024186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.325378  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.325566  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0611 07:00:32.327098  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.292193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.328607  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.328805  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0611 07:00:32.329890  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (869.041µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.331735  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.450553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.331976  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0611 07:00:32.332898  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (689.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.335156  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.746782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.335704  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0611 07:00:32.336738  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (841.745µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.338197  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.122958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.338383  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0611 07:00:32.339357  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (777.412µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.341353  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.481588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.341704  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0611 07:00:32.342842  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (833.489µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.344458  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.228225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.344716  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0611 07:00:32.347002  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.071571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.349099  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.701627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.349465  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0611 07:00:32.350545  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.350572  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.350716  109417 wrap.go:47] GET /healthz: (1.143312ms) 500
goroutine 11102 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008410cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008410cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0044f7c60, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002648810, 0xc00458ec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4400)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002648810, 0xc007fd4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007115440, 0xc007edc780, 0x74261e0, 0xc002648810, 0xc007fd4400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
I0611 07:00:32.350887  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.242977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.353177  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.803462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.353415  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0611 07:00:32.354523  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (901.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.356520  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.598641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.356849  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0611 07:00:32.358196  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (931.43µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.360201  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.360629  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0611 07:00:32.361072  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.361133  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.361293  109417 wrap.go:47] GET /healthz: (1.027156ms) 500
goroutine 11111 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083d9b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083d9b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00448aaa0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0019c8c18, 0xc004258280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6400)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0019c8c18, 0xc0086f6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00702aea0, 0xc007edc780, 0x74261e0, 0xc0019c8c18, 0xc0086f6400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.361613  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (819.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.363631  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.627836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.363903  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0611 07:00:32.364951  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (828.713µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.366985  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.518698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.367111  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0611 07:00:32.368256  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (983.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.369845  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.370246  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0611 07:00:32.371539  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.097885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.373555  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.707006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.373813  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0611 07:00:32.376464  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (2.306083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.382587  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.456695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.383234  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0611 07:00:32.384641  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.13932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.389405  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.012192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.389851  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0611 07:00:32.392358  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (2.043064ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.394353  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.379492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.394553  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0611 07:00:32.395442  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (773.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.397201  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.349434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.397396  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0611 07:00:32.398488  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (888.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.400589  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.575101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.400817  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0611 07:00:32.402186  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (995.445µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.404978  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.212463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.405160  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0611 07:00:32.406167  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (798.578µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.409371  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.8163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.409647  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0611 07:00:32.410790  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (800.704µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.412874  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.610945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.413118  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0611 07:00:32.414049  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (727.954µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.415781  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.393241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.416002  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0611 07:00:32.417414  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.156377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.419266  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.419435  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0611 07:00:32.420362  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (788.187µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.421953  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.184151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.422095  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0611 07:00:32.423958  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.638223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.425503  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.220945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.425818  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0611 07:00:32.426865  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (792.021µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.429525  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.319436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.429845  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0611 07:00:32.430899  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (761.174µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.432463  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.126454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.432765  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0611 07:00:32.433856  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (833.341µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.435726  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.328518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.435965  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0611 07:00:32.437133  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (843.556µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.438820  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.274304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.439100  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0611 07:00:32.440036  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (669.623µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.442494  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.900414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.442696  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0611 07:00:32.443856  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (966.912µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.445872  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.616301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.446241  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0611 07:00:32.447475  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (883.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.449019  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.190357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.449403  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
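Note: each GET → 404 followed by POST → 201 pair above is the RBAC bootstrap post-start hook ensuring one default ClusterRole exists (the storage_rbac.go:195 lines). A sketch of that ensure-exists reconcile, assuming client-go method signatures of this build's vintage (no context argument); ensureClusterRole is a hypothetical helper name, not the actual function in pkg/registry/rbac/rest:

```go
package main

import (
	"fmt"
	"os"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRole creates the role only when a GET says it is absent,
// mirroring the GET→404, POST→201 pairs in the log above.
func ensureClusterRole(client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present: nothing to reconcile
	}
	if !apierrors.IsNotFound(err) {
		return err // a real failure, not just "missing"
	}
	if _, err := client.RbacV1().ClusterRoles().Create(role); err != nil {
		return err
	}
	fmt.Printf("created clusterrole.rbac.authorization.k8s.io/%s\n", role.Name)
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := ensureClusterRole(client, &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "system:controller:ttl-controller"},
	}); err != nil {
		panic(err)
	}
}
```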
I0611 07:00:32.450278  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (664.698µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.450362  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.450388  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.450578  109417 wrap.go:47] GET /healthz: (978.148µs) 500
goroutine 11245 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084b6770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084b6770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003ff5700, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb018, 0xc004738500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c100)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb018, 0xc005e2c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006499da0, 0xc007edc780, 0x74261e0, 0xc0004eb018, 0xc005e2c100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
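Note: every goroutine trace in this log walks the same onion of handler wrappers — timeoutHandler, WithAuthentication, WithImpersonation, WithMaxInFlightLimit, WithAuthorization, then the mux that routes to the /healthz handler. The stack lists the deepest frame first, so reading bottom-up gives the order the request traversed the chain. A toy sketch of that decorator-chain pattern in plain net/http (illustrative only; the real filters live in k8s.io/apiserver/pkg/endpoints/filters and pkg/server/filters):

```go
package main

import (
	"fmt"
	"net/http"
)

// filter decorates a handler, in the style of the apiserver's WithXxx helpers.
type filter func(http.Handler) http.Handler

// named builds a filter that announces itself before delegating inward.
func named(name string) filter {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Printf("enter %s\n", name)
			next.ServeHTTP(w, r)
		})
	}
}

func main() {
	// The base handler stands in for the mux/director at the bottom of each trace.
	var h http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// Wrap innermost-first: the last filter applied (authentication) ends up
	// outermost and runs first, matching the frame order in the traces above.
	for _, f := range []filter{
		named("authorization"),
		named("max-in-flight"),
		named("impersonation"),
		named("authentication"),
	} {
		h = f(h)
	}
	http.Handle("/", h)
	_ = http.ListenAndServe(":8080", nil)
}
```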
I0611 07:00:32.461776  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.461813  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.461979  109417 wrap.go:47] GET /healthz: (1.691865ms) 500
goroutine 11247 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084b68c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084b68c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003ff59c0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb030, 0xc004032a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c600)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb030, 0xc005e2c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0048d8420, 0xc007edc780, 0x74261e0, 0xc0004eb030, 0xc005e2c600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.470910  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.947624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.471339  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0611 07:00:32.490360  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (981.628µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.510896  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.933604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.511187  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0611 07:00:32.530387  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.338858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.550724  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.550759  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.550925  109417 wrap.go:47] GET /healthz: (1.28005ms) 500
goroutine 10966 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005857490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005857490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004927b20, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f148c8, 0xc004258a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc00633a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f148c8, 0xc000be9d00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f148c8, 0xc000be9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007ddfa40, 0xc007edc780, 0x74261e0, 0xc001f148c8, 0xc000be9d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
I0611 07:00:32.551510  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.320674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.551764  109417 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0611 07:00:32.561534  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.561675  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.561864  109417 wrap.go:47] GET /healthz: (1.561457ms) 500
goroutine 11269 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084aa690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084aa690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f4e340, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdd790, 0xc00458f180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82b00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdd790, 0xc005c82b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004cc0540, 0xc007edc780, 0x74261e0, 0xc002fdd790, 0xc005c82b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.570358  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.334985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.591710  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.595851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.592044  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0611 07:00:32.610697  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.792002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.631035  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.651308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.631310  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0611 07:00:32.650754  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.420009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.651183  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.651201  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.651381  109417 wrap.go:47] GET /healthz: (1.792402ms) 500
goroutine 11288 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084d48c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084d48c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f49080, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb478, 0xc00458f680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377000)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb478, 0xc006377000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00488a5a0, 0xc007edc780, 0x74261e0, 0xc0004eb478, 0xc006377000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
I0611 07:00:32.661795  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.661908  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.662157  109417 wrap.go:47] GET /healthz: (1.912592ms) 500
goroutine 11271 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084aaa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084aaa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f4ee40, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc00458fcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83500)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002fdd7f8, 0xc005c83500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004cc0a80, 0xc007edc780, 0x74261e0, 0xc002fdd7f8, 0xc005c83500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
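Note: the wrap.go:47 lines throughout are the apiserver's request logger, which wraps each ResponseWriter to capture the status code a handler writes and prints method, path, latency, status, user agent, and remote address on completion — its respLogger is the topmost frame in every trace above. A toy version of that decorator (an assumption-laden sketch, not the real k8s.io/apiserver/pkg/server/httplog code):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// statusRecorder captures the status code a handler writes, the way
// respLogger.WriteHeader does in the traces above.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// withLogging prints one line per request in roughly the log's shape:
// "GET /healthz: (1.28005ms) 500 [agent addr]".
func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, req)
		fmt.Printf("%s %s: (%s) %d [%s %s]\n",
			req.Method, req.URL.Path, time.Since(start), rec.status,
			req.UserAgent(), req.RemoteAddr)
	})
}

func main() {
	http.Handle("/healthz", withLogging(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			http.Error(w, "healthz check failed", http.StatusInternalServerError)
		})))
	_ = http.ListenAndServe(":8080", nil)
}
```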
I0611 07:00:32.718579  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (49.55974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.718832  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0611 07:00:32.740372  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (21.330206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.750873  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.750901  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.751031  109417 wrap.go:47] GET /healthz: (1.310473ms) 500
goroutine 10976 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084e44d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084e44d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003ee35c0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f14a20, 0xc0039663c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b700)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f14a20, 0xc00633b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004c21320, 0xc007edc780, 0x74261e0, 0xc001f14a20, 0xc00633b700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:32.767664  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.767695  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.767869  109417 wrap.go:47] GET /healthz: (4.473149ms) 500
goroutine 11290 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084d4a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084d4a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f493e0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb488, 0xc004738b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377400)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb488, 0xc006377400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00488a840, 0xc007edc780, 0x74261e0, 0xc0004eb488, 0xc006377400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.788136  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (47.234205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.788429  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0611 07:00:32.790120  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.477976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.792234  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.694396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.792474  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0611 07:00:32.793509  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (881.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.795590  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.763507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.795880  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0611 07:00:32.810248  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.186399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.831069  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.951104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.831375  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0611 07:00:32.850632  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.850663  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.850809  109417 wrap.go:47] GET /healthz: (1.192334ms) 500
goroutine 11297 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084d52d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084d52d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003e06aa0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb608, 0xc003966c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fe900)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb608, 0xc0064fe900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004a8a840, 0xc007edc780, 0x74261e0, 0xc0004eb608, 0xc0064fe900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:32.851232  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.181371ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.861332  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.861374  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.861527  109417 wrap.go:47] GET /healthz: (1.264939ms) 500
goroutine 11315 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084d53b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084d53b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003e06d40, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb638, 0xc003cfe500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff000)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb638, 0xc0064ff000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004a8aba0, 0xc007edc780, 0x74261e0, 0xc0004eb638, 0xc0064ff000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.871398  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.344026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.871751  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0611 07:00:32.890041  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.071912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.911523  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.382241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.911803  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0611 07:00:32.930035  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.138417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.951500  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.463933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:32.952091  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0611 07:00:32.952173  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.952196  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.952385  109417 wrap.go:47] GET /healthz: (1.502542ms) 500
goroutine 11306 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085048c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085048c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d9e760, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f14d28, 0xc0047392c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547500)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f14d28, 0xc006547500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00465bd40, 0xc007edc780, 0x74261e0, 0xc001f14d28, 0xc006547500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:32.961360  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:32.961394  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:32.961552  109417 wrap.go:47] GET /healthz: (1.239329ms) 500
goroutine 11073 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084fa930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084fa930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d4b160, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc00282bb68, 0xc0012b2dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d000)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc00282bb68, 0xc00666d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00460b080, 0xc007edc780, 0x74261e0, 0xc00282bb68, 0xc00666d000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.970098  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.179971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.991336  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:32.991710  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0611 07:00:33.010640  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.636365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.031482  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.532057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.031732  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0611 07:00:33.050320  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.395951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.050717  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.050741  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.050891  109417 wrap.go:47] GET /healthz: (1.315472ms) 500
goroutine 11362 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084c1180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084c1180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003cba520, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc004033040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0012fbdc8, 0xc006bec300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004c07080, 0xc007edc780, 0x74261e0, 0xc0012fbdc8, 0xc006bec300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
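
Every goroutine dump above reads bottom-up through the same onion of handlers: the timeout filter spawns a goroutine that runs authentication, impersonation, the max-in-flight limit, and authorization before the mux dispatches to the healthz handler. The wrapping is ordinary func(http.Handler) http.Handler composition; here is a sketch of the idea, with filter names echoing the trace but stand-in bodies.

package main

import (
	"log"
	"net/http"
	"time"
)

// named wraps a handler so the filter shows up in logs, standing in for the
// real WithAuthentication/WithAuthorization/etc. filter bodies.
func named(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("entering %s", name)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Innermost handler first; each filter wraps the one below it, which is
	// exactly why the goroutine dumps show the filters in this order.
	var h http.Handler = mux
	h = named("WithAuthorization", h)
	h = named("WithMaxInFlightLimit", h)
	h = named("WithImpersonation", h)
	h = named("WithAuthentication", h)
	h = http.TimeoutHandler(h, time.Minute, "request timed out")
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", h))
}
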
I0611 07:00:33.075740  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.075770  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.075891  109417 wrap.go:47] GET /healthz: (7.214587ms) 500
goroutine 11308 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085051f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085051f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d9f180, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f14d98, 0xc004033540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547d00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f14d98, 0xc006547d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004340300, 0xc007edc780, 0x74261e0, 0xc001f14d98, 0xc006547d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
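
The wrap.go:47 entries and the httplog.(*respLogger).WriteHeader frames reflect one mechanism: the ResponseWriter is wrapped so the first WriteHeader call records the status, and once the handler returns the wrapper logs method, path, latency, status, user agent, and client address. A rough stand-in follows; the field and type names are invented, and the real type lives in k8s.io/apiserver/pkg/server/httplog.

package main

import (
	"log"
	"net/http"
	"time"
)

// respLogger is a hypothetical stand-in for httplog's response wrapper: it
// remembers the status code passed to the first WriteHeader call.
type respLogger struct {
	http.ResponseWriter
	status int
}

func (l *respLogger) WriteHeader(code int) {
	l.status = code // recordStatus in the stack traces above
	l.ResponseWriter.WriteHeader(code)
}

// withLogging produces lines shaped like the wrap.go entries in this log:
// METHOD /path: (latency) status [user-agent addr].
func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rl := &respLogger{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rl, r)
		log.Printf("%s %s: (%v) %d [%s %s]",
			r.Method, r.URL.Path, time.Since(start), rl.status,
			r.UserAgent(), r.RemoteAddr)
	})
}

func main() {
	http.ListenAndServe("127.0.0.1:8080",
		withLogging(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			http.Error(w, "healthz check failed", http.StatusInternalServerError)
		})))
}
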
I0611 07:00:33.076012  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.529427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.076200  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0611 07:00:33.091028  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.052706ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.110950  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.975968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.111148  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0611 07:00:33.130546  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.50939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.151470  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.151499  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.151598  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.584032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.151652  109417 wrap.go:47] GET /healthz: (2.148636ms) 500
goroutine 11367 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084c1a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084c1a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003cbbc80, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0012fbe88, 0xc004033cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed100)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0012fbe88, 0xc006bed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004c07c80, 0xc007edc780, 0x74261e0, 0xc0012fbe88, 0xc006bed100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
I0611 07:00:33.151858  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0611 07:00:33.161841  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.161878  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.162027  109417 wrap.go:47] GET /healthz: (1.773154ms) 500
goroutine 11319 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084d5d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084d5d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003e07dc0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004eb750, 0xc0012b3400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164200)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004eb750, 0xc007164200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004a8b680, 0xc007edc780, 0x74261e0, 0xc0004eb750, 0xc007164200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
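
On the client side, the scheduler.test and Go-http-client/1.1 requests landing on /healthz roughly every 100ms are readiness polls: the harness retries until the endpoint stops returning 500. Here is a minimal wait loop in that spirit; the interval and timeout are assumptions, not values read from the harness.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls GET /healthz until it returns 200 or the deadline
// passes; a 500 with "healthz check failed" just means "try again".
func waitForHealthy(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
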
I0611 07:00:33.170292  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.265053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.192072  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.074022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.192548  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0611 07:00:33.210171  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.206367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.231143  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.072626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.231366  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
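
Each of these POST bodies is a ClusterRoleBinding tying a controller's ClusterRole to its service account. The sketch below builds the usual bootstrap shape with the rbac/v1 types; the subject layout is stated from general knowledge of the default bootstrap policy, not read from this log.

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// bootstrapBinding builds the kind of object the POSTs above create: it
// grants the ClusterRole system:controller:NAME to the controller's service
// account in kube-system (assumed standard shape, not taken from this log).
func bootstrapBinding(name string) *rbacv1.ClusterRoleBinding {
	full := "system:controller:" + name
	return &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: full},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     full,
		},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      name,
			Namespace: "kube-system",
		}},
	}
}

func main() {
	b := bootstrapBinding("endpoint-controller")
	fmt.Println(b.Name, "->", b.Subjects[0].Namespace+"/"+b.Subjects[0].Name)
}
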
I0611 07:00:33.250288  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.336802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.250668  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.250695  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.250839  109417 wrap.go:47] GET /healthz: (1.251465ms) 500
goroutine 11310 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008505650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008505650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d9fce0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f14e48, 0xc003cfef00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76400)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f14e48, 0xc006c76400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0043414a0, 0xc007edc780, 0x74261e0, 0xc001f14e48, 0xc006c76400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
I0611 07:00:33.261225  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.261259  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.261398  109417 wrap.go:47] GET /healthz: (1.156981ms) 500
goroutine 11371 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085263f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085263f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003b7eec0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0012fbf60, 0xc000a19400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc005cd8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0012fbf60, 0xc006bedf00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0012fbf60, 0xc006bedf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00423b260, 0xc007edc780, 0x74261e0, 0xc0012fbf60, 0xc006bedf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.270985  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.271295  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0611 07:00:33.290496  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.519548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.311043  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.751719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.311416  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0611 07:00:33.330480  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.441283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.350913  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.920262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.351091  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.351139  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.351162  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0611 07:00:33.351314  109417 wrap.go:47] GET /healthz: (1.665299ms) 500
goroutine 11312 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008505730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008505730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d9ffa0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f14e78, 0xc004739900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76b00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f14e78, 0xc006c76b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003d16600, 0xc007edc780, 0x74261e0, 0xc001f14e78, 0xc006c76b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:33.361532  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.361565  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.361716  109417 wrap.go:47] GET /healthz: (1.454297ms) 500
goroutine 11380 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008535030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008535030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003aa1f20, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc009a706e8, 0xc003cff7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc009a706e8, 0xc00765f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003943a40, 0xc007edc780, 0x74261e0, 0xc009a706e8, 0xc00765f300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.370451  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.474498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.391038  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.006267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.391339  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0611 07:00:33.410439  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.397795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.431169  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.223931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.431477  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0611 07:00:33.450167  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.200662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.451041  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.451065  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.451248  109417 wrap.go:47] GET /healthz: (1.547966ms) 500
goroutine 11377 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085277a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085277a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003a95240, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0024a8158, 0xc000a19cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9800)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0024a8158, 0xc005cd9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0035c1aa0, 0xc007edc780, 0x74261e0, 0xc0024a8158, 0xc005cd9800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
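
The created by ...(*timeoutHandler).ServeHTTP footer on every dump explains where these goroutines come from: the timeout filter runs the rest of the chain in a fresh goroutine so it can abandon the work when the deadline fires, and a guarded writer (the baseTimeoutWriter frame above plays this role) keeps an abandoned handler from racing on the response. A simplified, hedged reimplementation of that pattern:

package main

import (
	"net/http"
	"sync"
	"time"
)

// timeoutWriter blocks writes once the request has timed out, a cut-down
// analogue of the baseTimeoutWriter seen in the stack traces.
type timeoutWriter struct {
	http.ResponseWriter
	mu       sync.Mutex
	timedOut bool
}

func (tw *timeoutWriter) Write(p []byte) (int, error) {
	tw.mu.Lock()
	defer tw.mu.Unlock()
	if tw.timedOut {
		return 0, http.ErrHandlerTimeout
	}
	return tw.ResponseWriter.Write(p)
}

func (tw *timeoutWriter) WriteHeader(code int) {
	tw.mu.Lock()
	defer tw.mu.Unlock()
	if tw.timedOut {
		return
	}
	tw.ResponseWriter.WriteHeader(code)
}

// timeoutHandler runs the inner handler in its own goroutine, which is why
// the dumps above end with "created by ...(*timeoutHandler).ServeHTTP".
type timeoutHandler struct {
	inner   http.Handler
	timeout time.Duration
}

func (t *timeoutHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	tw := &timeoutWriter{ResponseWriter: w}
	done := make(chan struct{})
	go func() {
		defer close(done)
		t.inner.ServeHTTP(tw, r)
	}()
	select {
	case <-done:
	case <-time.After(t.timeout):
		tw.mu.Lock()
		tw.timedOut = true // in-flight writes finish first; later ones are refused
		tw.mu.Unlock()
		http.Error(w, "request timed out", http.StatusGatewayTimeout)
	}
}

func main() {
	http.ListenAndServe("127.0.0.1:8080", &timeoutHandler{
		inner:   http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) }),
		timeout: time.Minute,
	})
}
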
I0611 07:00:33.461898  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.461938  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.462093  109417 wrap.go:47] GET /healthz: (1.739089ms) 500
goroutine 11427 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008527b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008527b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003a95ac0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0024a81a8, 0xc0040e03c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9e00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0024a81a8, 0xc005cd9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001102120, 0xc007edc780, 0x74261e0, 0xc0024a81a8, 0xc005cd9e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.471354  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.445686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.471903  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0611 07:00:33.490547  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.554359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.511545  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.547862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.511866  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0611 07:00:33.530722  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.658802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.552373  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.552409  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.552425  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.418371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.552552  109417 wrap.go:47] GET /healthz: (2.982913ms) 500
goroutine 11447 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008545a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008545a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00398eee0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0004ebd10, 0xc0040e0a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0004ebd10, 0xc007fc3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0040ab740, 0xc007edc780, 0x74261e0, 0xc0004ebd10, 0xc007fc3300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:33.552790  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0611 07:00:33.561743  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.561779  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.561946  109417 wrap.go:47] GET /healthz: (1.522028ms) 500
goroutine 11431 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00856e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00856e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038e6ae0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0024a8270, 0xc0040e0f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7ce00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0024a8270, 0xc007f7ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00238e240, 0xc007edc780, 0x74261e0, 0xc0024a8270, 0xc007f7ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.570418  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.444995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.591345  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.360283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.591657  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0611 07:00:33.610382  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.322013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.631076  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.076654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.631380  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0611 07:00:33.650680  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.588515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.650849  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.651463  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.651681  109417 wrap.go:47] GET /healthz: (1.874049ms) 500
goroutine 11433 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00856e4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00856e4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038e7700, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0024a8350, 0xc003967540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc008484000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0024a8350, 0xc007f7df00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0024a8350, 0xc007f7df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002c08060, 0xc007edc780, 0x74261e0, 0xc0024a8350, 0xc007f7df00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:33.661353  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.661391  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.661553  109417 wrap.go:47] GET /healthz: (1.34059ms) 500
goroutine 11420 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085608c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085608c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038d7560, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f15150, 0xc003967a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1600)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f15150, 0xc007ec1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000d13c20, 0xc007edc780, 0x74261e0, 0xc001f15150, 0xc007ec1600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.675146  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.150786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.675474  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0611 07:00:33.690519  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.494502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.712005  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.936854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.712359  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0611 07:00:33.731032  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.586012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.751177  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.130621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.751459  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.751464  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0611 07:00:33.751485  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.751645  109417 wrap.go:47] GET /healthz: (2.01664ms) 500
goroutine 11439 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00856ed90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00856ed90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00382d900, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0024a8508, 0xc0023aa140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485600)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0024a8508, 0xc008485600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002c09020, 0xc007edc780, 0x74261e0, 0xc0024a8508, 0xc008485600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
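
The 500s above are the apiserver's /healthz endpoint reporting, check by check, why it is not ready yet: each [+] line is a passing health check, and the single [-] line is the rbac/bootstrap-roles post-start hook that has not finished. As a minimal sketch of how a client sees this (the port below is hypothetical; the integration-test apiserver binds an ephemeral 127.0.0.1 port), the same verbose breakdown can be requested with a plain GET:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical address: the test apiserver listens on an ephemeral
	// 127.0.0.1 port chosen at startup, not a fixed 8080.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// While any check fails, the endpoint returns 500 plus the per-check
	// [+]/[-] lines seen in the log; once all checks pass it returns 200.
	fmt.Println(resp.Status)
	fmt.Print(string(body))
}
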
I0611 07:00:33.761234  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.761276  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.761427  109417 wrap.go:47] GET /healthz: (1.138299ms) 500
goroutine 11201 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00847b5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00847b5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004030a00, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002649348, 0xc002896000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002649348, 0xc006d04600)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002649348, 0xc006d04600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0040da6c0, 0xc007edc780, 0x74261e0, 0xc002649348, 0xc006d04600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.770344  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.357269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.791249  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.138532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.791518  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0611 07:00:33.810571  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.530736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.831317  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.237884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.831619  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0611 07:00:33.850386  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.381604ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.850527  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.850545  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.850696  109417 wrap.go:47] GET /healthz: (1.077389ms) 500
goroutine 11465 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085617a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085617a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037a2680, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f153e0, 0xc0023ab2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b800)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f153e0, 0xc00850b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002068300, 0xc007edc780, 0x74261e0, 0xc001f153e0, 0xc00850b800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:33.861454  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.861490  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.861653  109417 wrap.go:47] GET /healthz: (1.313245ms) 500
goroutine 11356 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00850c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00850c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003bb7400, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0046c22b8, 0xc004259cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1d900)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0046c22b8, 0xc006c1d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004447f20, 0xc007edc780, 0x74261e0, 0xc0046c22b8, 0xc006c1d900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.871192  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.237208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.871591  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0611 07:00:33.890530  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.424283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.911327  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.331352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.911568  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
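
Each GET ... 404 followed by a POST ... 201 pair in these lines is the RBAC bootstrap storage (storage_rbac.go) ensuring a default binding exists: look the object up by name, and create it only on NotFound. A rough equivalent with recent client-go signatures — the clientset wiring is assumed, and the real reconciler also repairs drifted objects rather than only creating missing ones — looks like:

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRoleBinding creates crb only if no object with that name
// exists yet, mirroring the 404-then-201 request pairs in the log.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	}
	return err
}
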
I0611 07:00:33.930297  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.301426ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.952036  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.079723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:33.952661  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.952692  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.952714  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0611 07:00:33.952848  109417 wrap.go:47] GET /healthz: (1.419814ms) 500
goroutine 11484 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085af3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085af3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0013dc9a0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc009a70d20, 0xc0023ab7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661300)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc009a70d20, 0xc008661300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00235b500, 0xc007edc780, 0x74261e0, 0xc009a70d20, 0xc008661300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38252]
I0611 07:00:33.961930  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:33.961995  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:33.962167  109417 wrap.go:47] GET /healthz: (1.993017ms) 500
goroutine 11507 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00847b960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00847b960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004030f40, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0026493d8, 0xc002896c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d04f00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0026493d8, 0xc006d04f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0040dac00, 0xc007edc780, 0x74261e0, 0xc0026493d8, 0xc006d04f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.973195  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.385194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.991439  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.485094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:33.991679  109417 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0611 07:00:34.010429  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.376022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.015842  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.319581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.031295  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.310736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.031742  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
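
Namespaced objects follow the same ensure pattern with one extra request visible above: a GET on /api/v1/namespaces/kube-system (200) to confirm the namespace exists before the Role is POSTed. A sketch using the same imports as the previous snippet; the policy rules below approximate what extension-apiserver-authentication-reader grants and are written from memory, not copied from the test:

// ensureAuthnReaderRole checks the namespace, then creates the Role if absent.
func ensureAuthnReaderRole(ctx context.Context, cs kubernetes.Interface) error {
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "extension-apiserver-authentication-reader",
			Namespace: "kube-system",
		},
		// Approximate rules: read access to the authentication configmap.
		Rules: []rbacv1.PolicyRule{{
			APIGroups:     []string{""},
			Resources:     []string{"configmaps"},
			ResourceNames: []string{"extension-apiserver-authentication"},
			Verbs:         []string{"get", "list", "watch"},
		}},
	}
	// Mirrors the GET /api/v1/namespaces/kube-system seen in the log.
	if _, err := cs.CoreV1().Namespaces().Get(ctx, role.Namespace, metav1.GetOptions{}); err != nil {
		return err
	}
	_, err := cs.RbacV1().Roles(role.Namespace).Create(ctx, role, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil
	}
	return err
}
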
I0611 07:00:34.050371  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.347655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.051306  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.051341  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.051507  109417 wrap.go:47] GET /healthz: (988.652µs) 500
goroutine 11524 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085b89a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085b89a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0020bcc20, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f155a0, 0xc0028972c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc0012f6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f155a0, 0xc008aeff00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f155a0, 0xc008aeff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004626300, 0xc007edc780, 0x74261e0, 0xc001f155a0, 0xc008aeff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:34.052501  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.44661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.061754  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.061794  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.061969  109417 wrap.go:47] GET /healthz: (1.666112ms) 500
goroutine 11539 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085afe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085afe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002024840, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc009a70e48, 0xc0023abcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2b00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc009a70e48, 0xc0093f2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0045182a0, 0xc007edc780, 0x74261e0, 0xc009a70e48, 0xc0093f2b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.074908  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.248445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.075267  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0611 07:00:34.090236  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.242646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.091900  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.233715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.111757  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.622847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.112038  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0611 07:00:34.130451  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.451983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.132578  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.552297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.151201  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.151248  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.151419  109417 wrap.go:47] GET /healthz: (1.784288ms) 500
goroutine 11514 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085d67e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085d67e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001eeb940, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002649660, 0xc00407ca00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002649660, 0xc001389000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002649660, 0xc001388f00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002649660, 0xc001388f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0040dbc80, 0xc007edc780, 0x74261e0, 0xc002649660, 0xc001388f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:34.151699  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.689315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.152064  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0611 07:00:34.161618  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.161652  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.161815  109417 wrap.go:47] GET /healthz: (1.555944ms) 500
goroutine 11409 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085c3f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085c3f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001d768c0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0071b8490, 0xc002429900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a700)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0071b8490, 0xc00245a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0047087e0, 0xc007edc780, 0x74261e0, 0xc0071b8490, 0xc00245a700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
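
Stepping back, the timing of these entries shows the repeating rhythm of the log: two pollers (the connections on local ports 38234 and 38252) retry GET /healthz roughly every 100ms and collect a 500 each time until the bootstrap hook completes. The waiting side of that loop looks, in outline, like the following sketch (the pattern, not the test's literal code):

import (
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHealthy polls url every 100ms until it answers 200 OK,
// giving up after 30s. Transient errors are swallowed and retried.
func waitForHealthy(url string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			return false, nil
		}
		resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}
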
I0611 07:00:34.170824  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.846941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.172751  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.464465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.193610  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.561052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.193927  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0611 07:00:34.210757  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.745121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.212572  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.286562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.231696  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.655335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.231978  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0611 07:00:34.250773  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.250802  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.250962  109417 wrap.go:47] GET /healthz: (1.38616ms) 500
goroutine 11555 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085d72d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085d72d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00024dd80, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0026497b0, 0xc0028883c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632700)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0026497b0, 0xc002632700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004aaa180, 0xc007edc780, 0x74261e0, 0xc0026497b0, 0xc002632700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:34.251400  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.488593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.253174  109417 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.440903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.261429  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.261458  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.261597  109417 wrap.go:47] GET /healthz: (1.442487ms) 500
goroutine 11545 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085e5030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085e5030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001d6b040, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc009a71088, 0xc002897900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc009a71088, 0xc002564800)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc009a71088, 0xc002564800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004519a40, 0xc007edc780, 0x74261e0, 0xc009a71088, 0xc002564800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.271167  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.035616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.271590  109417 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0611 07:00:34.290310  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.257566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.292135  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.430325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.315824  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.761083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.316114  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0611 07:00:34.330178  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.216919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.332084  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.39139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.350630  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.350664  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.350805  109417 wrap.go:47] GET /healthz: (1.168206ms) 500
goroutine 11557 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0085d7c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0085d7c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000b6daa0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0026499e8, 0xc00407d7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633400)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0026499e8, 0xc002633400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004aab200, 0xc007edc780, 0x74261e0, 0xc0026499e8, 0xc002633400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:34.351159  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.238168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.351417  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0611 07:00:34.361349  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.361382  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.361545  109417 wrap.go:47] GET /healthz: (1.247906ms) 500
goroutine 11577 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086247e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086247e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0002f43e0, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc001f15a58, 0xc0012b3900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc0029be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc001f15a58, 0xc002567f00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc001f15a58, 0xc002567f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0030456e0, 0xc007edc780, 0x74261e0, 0xc001f15a58, 0xc002567f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.370303  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.283756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.371865  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.143582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.391444  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.35331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.391794  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
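
The rolebindings created in this stretch each attach one of the just-created kube-system roles to a subject. Schematically, and reusing the earlier imports, a binding like system::leader-locking-kube-scheduler could be built as below; the subject is an assumption for illustration, since the log records only the object names, not their contents:

// Sketch of one of the bindings named in the log. RoleRef points at the
// Role of the same name created a few lines earlier; the Subject is a
// guess, not taken from the test output.
rb := &rbacv1.RoleBinding{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "system::leader-locking-kube-scheduler",
		Namespace: "kube-system",
	},
	RoleRef: rbacv1.RoleRef{
		APIGroup: rbacv1.GroupName,
		Kind:     "Role",
		Name:     "system::leader-locking-kube-scheduler",
	},
	Subjects: []rbacv1.Subject{{
		Kind: rbacv1.UserKind,
		Name: "system:kube-scheduler",
	}},
}
if _, err := cs.RbacV1().RoleBindings(rb.Namespace).Create(ctx, rb, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
	return err
}
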
I0611 07:00:34.410662  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.541657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.412983  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.495262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.431270  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.239113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.431579  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0611 07:00:34.450427  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.400936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.450863  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.450888  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.451033  109417 wrap.go:47] GET /healthz: (1.432859ms) 500
goroutine 11561 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008632c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008632c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00121c900, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002649e30, 0xc0095ce140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eaa00)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002649e30, 0xc0029eaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0035eefc0, 0xc007edc780, 0x74261e0, 0xc002649e30, 0xc0029eaa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
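
The repeated 500s above come from the apiserver's /healthz handler: it aggregates named checks and fails the whole endpoint while the rbac/bootstrap-roles post-start hook is still running. A minimal stdlib-only sketch of that aggregation pattern, producing the same "[+]name ok" / "[-]name failed" report format; the check wiring here is hypothetical, not the apiserver's actual healthz package:

    package main

    import (
        "fmt"
        "net/http"
        "strings"
    )

    // healthCheck is one named readiness probe; the apiserver composes
    // many of these (ping, log, etcd, poststarthook/..., and so on).
    type healthCheck struct {
        name  string
        check func() error
    }

    // handleHealthz runs every check and returns 500 if any one fails,
    // mirroring the log output above.
    func handleHealthz(checks []healthCheck) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            var b strings.Builder
            failed := false
            for _, c := range checks {
                if err := c.check(); err != nil {
                    failed = true
                    // The real endpoint withholds the reason from callers.
                    fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
                } else {
                    fmt.Fprintf(&b, "[+]%s ok\n", c.name)
                }
            }
            if failed {
                b.WriteString("healthz check failed\n")
                http.Error(w, b.String(), http.StatusInternalServerError)
                return
            }
            fmt.Fprintln(w, "ok")
        }
    }

    func main() {
        done := false // would flip to true once bootstrap roles are reconciled (assumption)
        checks := []healthCheck{
            {"ping", func() error { return nil }},
            {"poststarthook/rbac/bootstrap-roles", func() error {
                if !done {
                    return fmt.Errorf("not finished")
                }
                return nil
            }},
        }
        http.HandleFunc("/healthz", handleHealthz(checks))
        http.ListenAndServe(":8080", nil)
    }
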
I0611 07:00:34.452531  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.288733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.461688  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.461721  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.461874  109417 wrap.go:47] GET /healthz: (1.604631ms) 500
goroutine 11500 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008617500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008617500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001392280, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002b68000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c800)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc0046c26d0, 0xc002a6c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032ab200, 0xc007edc780, 0x74261e0, 0xc0046c26d0, 0xc002a6c800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.471538  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.48274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.471925  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0611 07:00:34.490693  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.690555ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.499962  109417 wrap.go:47] GET /api/v1/namespaces/kube-system: (8.431549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.511022  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.096698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.511511  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0611 07:00:34.532139  109417 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (3.104475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.533782  109417 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.152844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.550782  109417 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 07:00:34.550814  109417 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 07:00:34.550992  109417 wrap.go:47] GET /healthz: (1.330895ms) 500
goroutine 11563 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086335e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086335e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026ecf60, 0x1f4)
net/http.Error(0x7f1e2c3b8c60, 0xc002649fd8, 0xc0095ce780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
net/http.HandlerFunc.ServeHTTP(0xc0049e96c0, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc004012f00, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc0049d8750, 0xc005bdf1f0, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
net/http.HandlerFunc.ServeHTTP(0xc0075d5580, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
net/http.HandlerFunc.ServeHTTP(0xc004c6baa0, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
net/http.HandlerFunc.ServeHTTP(0xc0075d55c0, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eb900)
net/http.HandlerFunc.ServeHTTP(0xc007abeaa0, 0x7f1e2c3b8c60, 0xc002649fd8, 0xc0029eb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003bb1020, 0xc007edc780, 0x74261e0, 0xc002649fd8, 0xc0029eb900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:38234]
I0611 07:00:34.551809  109417 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.744956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.552061  109417 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0611 07:00:34.561300  109417 wrap.go:47] GET /healthz: (1.002769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.563425  109417 wrap.go:47] GET /api/v1/namespaces/default: (1.769432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.565880  109417 wrap.go:47] POST /api/v1/namespaces: (1.712814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.567284  109417 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (997.922µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.571162  109417 wrap.go:47] POST /api/v1/namespaces/default/services: (3.407953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.572384  109417 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (865.438µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.574325  109417 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.604841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.651145  109417 wrap.go:47] GET /healthz: (1.329615ms) 200 [Go-http-client/1.1 127.0.0.1:38252]
W0611 07:00:34.651964  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652010  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652024  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652041  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652057  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652070  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652085  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652099  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652113  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 07:00:34.652171  109417 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
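
The ten warnings above appear once per started informer because the test enables client-go's cache mutation detector (the KUBE_CACHE_MUTATION_DETECTOR environment variable); it snapshots cached objects and panics if anything mutates them in place. The discipline it enforces, sketched with the real lister types (function and label names are illustrative):

    package informerutil

    import (
        corev1 "k8s.io/api/core/v1"
        listersv1 "k8s.io/client-go/listers/core/v1"
    )

    // labelPod returns a labeled copy of a cached Pod. Listers hand out
    // pointers into the shared informer cache, so the object must be
    // deep-copied before mutation; the mutation detector exists to catch
    // code that modifies the cached object directly.
    func labelPod(lister listersv1.PodLister, ns, name string) (*corev1.Pod, error) {
        pod, err := lister.Pods(ns).Get(name)
        if err != nil {
            return nil, err
        }
        podCopy := pod.DeepCopy() // safe: private copy
        if podCopy.Labels == nil {
            podCopy.Labels = map[string]string{}
        }
        podCopy.Labels["touched"] = "true"
        return podCopy, nil
    }
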
I0611 07:00:34.652202  109417 factory.go:345] Creating scheduler from algorithm provider 'DefaultProvider'
I0611 07:00:34.652235  109417 factory.go:433] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
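
The factory line above names the two plugin families of this era's scheduler: fit predicates filter nodes, priority functions rank the survivors. A toy model of that split, under heavily simplified types that are not the scheduler's real signatures:

    package schedtoy

    // Simplified stand-ins for the scheduler's pod and node views.
    type Pod struct{ MilliCPU int64 }
    type Node struct {
        Name                string
        AllocatableMilliCPU int64
    }

    // FitPredicate answers feasibility for one pod on one node.
    type FitPredicate func(p Pod, n Node) (fits bool, reason string)

    // PriorityFunc scores a feasible node; higher is better.
    type PriorityFunc func(p Pod, n Node) int

    func cpuFits(p Pod, n Node) (bool, string) {
        if p.MilliCPU > n.AllocatableMilliCPU {
            return false, "Insufficient cpu"
        }
        return true, ""
    }

    // feasibleNodes filters with every registered predicate; this is the
    // step behind log lines like "2 nodes evaluated, 2 nodes were found
    // feasible" further below.
    func feasibleNodes(p Pod, nodes []Node, preds []FitPredicate) []Node {
        var out []Node
        for _, n := range nodes {
            ok := true
            for _, pred := range preds {
                if fits, _ := pred(p, n); !fits {
                    ok = false
                    break
                }
            }
            if ok {
                out = append(out, n)
            }
        }
        return out
    }
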
I0611 07:00:34.652782  109417 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.652813  109417 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.653170  109417 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.653195  109417 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654062  109417 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (571.399µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.654065  109417 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (607.435µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:34.654302  109417 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654330  109417 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654595  109417 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654615  109417 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654774  109417 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654811  109417 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654880  109417 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.654897  109417 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655097  109417 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655107  109417 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655146  109417 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655163  109417 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655404  109417 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655417  109417 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655434  109417 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.655451  109417 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0611 07:00:34.656274  109417 get.go:250] Starting watch for /api/v1/nodes, rv=22208 labels= fields= timeout=5m12s
I0611 07:00:34.656330  109417 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (394.179µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:34.657200  109417 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (413.388µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38464]
I0611 07:00:34.657274  109417 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (281.403µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38458]
I0611 07:00:34.657454  109417 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (419.222µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38452]
I0611 07:00:34.657485  109417 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=22208 labels= fields= timeout=6m20s
I0611 07:00:34.657786  109417 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=22216 labels= fields= timeout=8m14s
I0611 07:00:34.657919  109417 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=22221 labels= fields= timeout=7m3s
I0611 07:00:34.657963  109417 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (622.645µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38462]
I0611 07:00:34.657971  109417 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=22208 labels= fields= timeout=9m32s
I0611 07:00:34.658127  109417 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (1.266993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38460]
I0611 07:00:34.658600  109417 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (1.782556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38456]
I0611 07:00:34.658632  109417 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=22221 labels= fields= timeout=5m21s
I0611 07:00:34.658655  109417 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=22210 labels= fields= timeout=8m30s
I0611 07:00:34.659162  109417 get.go:250] Starting watch for /api/v1/pods, rv=22208 labels= fields= timeout=5m51s
I0611 07:00:34.659228  109417 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (1.031106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38454]
I0611 07:00:34.659911  109417 get.go:250] Starting watch for /api/v1/services, rv=22568 labels= fields= timeout=9m26s
I0611 07:00:34.660346  109417 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=22221 labels= fields= timeout=9m31s
I0611 07:00:34.752680  109417 shared_informer.go:176] caches populated
I0611 07:00:34.852874  109417 shared_informer.go:176] caches populated
I0611 07:00:34.953112  109417 shared_informer.go:176] caches populated
I0611 07:00:35.053299  109417 shared_informer.go:176] caches populated
I0611 07:00:35.153539  109417 shared_informer.go:176] caches populated
I0611 07:00:35.253798  109417 shared_informer.go:176] caches populated
I0611 07:00:35.354122  109417 shared_informer.go:176] caches populated
I0611 07:00:35.454305  109417 shared_informer.go:176] caches populated
I0611 07:00:35.558360  109417 shared_informer.go:176] caches populated
I0611 07:00:35.654756  109417 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 07:00:35.656133  109417 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 07:00:35.657881  109417 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 07:00:35.658535  109417 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 07:00:35.658574  109417 shared_informer.go:176] caches populated
I0611 07:00:35.659095  109417 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 07:00:35.659750  109417 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 07:00:35.758783  109417 shared_informer.go:176] caches populated
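
The "caches populated" polling above is the test waiting for the scheduler's informers to finish their initial LIST before it creates nodes and pods. With client-go this is conventionally done via cache.WaitForCacheSync; a minimal sketch, assuming an already-constructed clientset and using the 1s resync period seen in the reflector lines:

    package syncwait

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // waitForInformers starts a shared informer factory and blocks until
    // the pod and node caches have completed their initial LIST, the same
    // condition the "caches populated" loop checks.
    func waitForInformers(cs kubernetes.Interface, stopCh <-chan struct{}) error {
        factory := informers.NewSharedInformerFactory(cs, time.Second)
        podsSynced := factory.Core().V1().Pods().Informer().HasSynced
        nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced
        factory.Start(stopCh)
        if !cache.WaitForCacheSync(stopCh, podsSynced, nodesSynced) {
            return fmt.Errorf("timed out waiting for caches to sync")
        }
        return nil
    }
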
I0611 07:00:35.762332  109417 wrap.go:47] POST /api/v1/nodes: (2.845883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38482]
I0611 07:00:35.765524  109417 wrap.go:47] POST /api/v1/nodes: (2.511524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38482]
I0611 07:00:35.769591  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.769634  109417 scheduler.go:456] Attempting to schedule pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.769931  109417 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-0"
I0611 07:00:35.769964  109417 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0611 07:00:35.770062  109417 factory.go:727] Attempting to bind test-pod to test-node-0
I0611 07:00:35.773506  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods: (7.152437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38482]
I0611 07:00:35.774141  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod/binding: (3.327071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.774349  109417 scheduler.go:593] pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod is bound successfully on node test-node-0, 2 nodes evaluated, 2 nodes were found feasible
I0611 07:00:35.777026  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (2.341176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.875975  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (1.710962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.882574  109417 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (6.061375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.885406  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (1.267806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.887941  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods: (2.112295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.888321  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.888337  109417 scheduler.go:456] Attempting to schedule pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.888568  109417 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-1"
I0611 07:00:35.888582  109417 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0611 07:00:35.888622  109417 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0611 07:00:35.888638  109417 factory.go:678] Error scheduling unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0611 07:00:35.888666  109417 factory.go:736] Updating pod condition for unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0611 07:00:35.894711  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (5.24422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38488]
I0611 07:00:35.896382  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.896404  109417 scheduler.go:456] Attempting to schedule pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.896651  109417 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-0"
I0611 07:00:35.896666  109417 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-0": all PVCs bound and nothing to do
E0611 07:00:35.896720  109417 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0611 07:00:35.896737  109417 factory.go:678] Error scheduling unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0611 07:00:35.896769  109417 factory.go:736] Updating pod condition for unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0611 07:00:35.902348  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (2.978117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38490]
I0611 07:00:35.902914  109417 wrap.go:47] PATCH /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events/test-pod.15a712c59ec3cf6d: (5.570926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38488]
I0611 07:00:35.907636  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (17.581823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38482]
I0611 07:00:35.909003  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (2.599294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38488]
I0611 07:00:35.909637  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.909725  109417 scheduler.go:452] Skip schedule deleting pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.911153  109417 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (7.566173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38490]
I0611 07:00:35.911998  109417 wrap.go:47] PUT /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod/status: (21.60936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.913363  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (1.59725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38488]
I0611 07:00:35.914817  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (2.061018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38484]
I0611 07:00:35.917667  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods: (2.174588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38488]
I0611 07:00:35.918049  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.918066  109417 scheduler.go:456] Attempting to schedule pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:35.918318  109417 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-1"
I0611 07:00:35.918336  109417 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-1": all PVCs bound and nothing to do
I0611 07:00:35.918381  109417 framework.go:198] rejected by prebind-plugin at prebind: reject pod test-pod
E0611 07:00:35.918396  109417 factory.go:678] Error scheduling unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod: rejected by prebind-plugin at prebind: reject pod test-pod; retrying
I0611 07:00:35.918440  109417 factory.go:736] Updating pod condition for unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod to (PodScheduled==False, Reason=Unschedulable)
I0611 07:00:35.922514  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (2.054229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:35.922538  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (3.005813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:35.922766  109417 wrap.go:47] PUT /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod/status: (3.713219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38482]
I0611 07:00:36.027807  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (2.216969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:36.032203  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.032275  109417 scheduler.go:452] Skip schedule deleting pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.034158  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (1.562159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:36.035110  109417 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (6.531567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:36.037646  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (993.491µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:36.039684  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods: (1.68706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:36.040121  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.040140  109417 scheduler.go:456] Attempting to schedule pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.040464  109417 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-0"
I0611 07:00:36.040491  109417 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-0": all PVCs bound and nothing to do
E0611 07:00:36.040542  109417 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0611 07:00:36.040562  109417 factory.go:678] Error scheduling unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0611 07:00:36.040591  109417 factory.go:736] Updating pod condition for unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0611 07:00:36.042839  109417 wrap.go:47] PUT /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod/status: (1.515292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:36.043128  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.043149  109417 scheduler.go:456] Attempting to schedule pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.043400  109417 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-1"
I0611 07:00:36.043421  109417 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0611 07:00:36.043474  109417 framework.go:202] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0611 07:00:36.043491  109417 factory.go:678] Error scheduling unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0611 07:00:36.043516  109417 factory.go:736] Updating pod condition for unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0611 07:00:36.043872  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (1.990267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:36.044262  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (2.276692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38496]
I0611 07:00:36.046635  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (2.716945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
E0611 07:00:36.046870  109417 factory.go:702] pod is already present in unschedulableQ
I0611 07:00:36.048072  109417 wrap.go:47] PATCH /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events/test-pod.15a712c5a7d2057c: (3.145151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38496]
I0611 07:00:36.051490  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (1.166905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:36.054476  109417 scheduling_queue.go:815] About to try and schedule pod unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.054528  109417 scheduler.go:452] Skip schedule deleting pod: unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/test-pod
I0611 07:00:36.056528  109417 wrap.go:47] POST /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/events: (1.75228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38492]
I0611 07:00:36.056861  109417 wrap.go:47] DELETE /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (5.001948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:36.059674  109417 wrap.go:47] GET /api/v1/namespaces/unreserve-plugine21cd8d4-10f9-468e-a96b-b926240eda1a/pods/test-pod: (1.040282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
E0611 07:00:36.060302  109417 scheduling_queue.go:818] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0611 07:00:36.060480  109417 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=22221&timeout=9m31s&timeoutSeconds=571&watch=true: (1.400370562s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38252]
I0611 07:00:36.060492  109417 wrap.go:47] GET /api/v1/services?resourceVersion=22568&timeout=9m26s&timeoutSeconds=566&watch=true: (1.400841712s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38454]
I0611 07:00:36.060505  109417 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=22216&timeout=8m14s&timeoutSeconds=494&watch=true: (1.402897821s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38464]
I0611 07:00:36.060543  109417 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=22208&timeout=6m20s&timeoutSeconds=380&watch=true: (1.403375032s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38450]
I0611 07:00:36.060556  109417 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=22208&timeout=9m32s&timeoutSeconds=572&watch=true: (1.402788535s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38458]
I0611 07:00:36.060570  109417 wrap.go:47] GET /api/v1/nodes?resourceVersion=22208&timeout=5m12s&timeoutSeconds=312&watch=true: (1.404622754s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38234]
I0611 07:00:36.060615  109417 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=22221&timeout=7m3s&timeoutSeconds=423&watch=true: (1.402903215s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38452]
I0611 07:00:36.060613  109417 wrap.go:47] GET /api/v1/pods?resourceVersion=22208&timeout=5m51s&timeoutSeconds=351&watch=true: (1.401625963s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38456]
I0611 07:00:36.060628  109417 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=22221&timeout=5m21s&timeoutSeconds=321&watch=true: (1.40220131s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38460]
I0611 07:00:36.060668  109417 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=22210&timeout=8m30s&timeoutSeconds=510&watch=true: (1.402236693s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38462]
I0611 07:00:36.068412  109417 wrap.go:47] DELETE /api/v1/nodes: (7.736892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:36.068624  109417 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0611 07:00:36.069944  109417 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.067792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
I0611 07:00:36.071900  109417 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.502983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38494]
framework_test.go:466: test #1: Expected the unreserve plugin to be called 2 times, but it was called 1 time.
framework_test.go:474: test #2: Expected the unreserve plugin to be called 1 time, but it was called 2 times.
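
These two assertions are the actual failure. The log above shows why the counts can drift: after each injected prebind failure the scheduler immediately requeues and retries the pod (note the back-to-back "injecting failure" attempts at 07:00:36.040 and 07:00:36.043 before the DELETE), so the number of unreserve invocations observed depends on how many retries land before the test deletes the pod; the 2-vs-1 and 1-vs-2 pattern suggests one retry's unreserve call being attributed to the neighboring subtest. A stripped-down model of the counting plugin and assertion, with hypothetical types standing in for the real scheduler framework hooks:

    package unreservetoy

    import (
        "fmt"
        "sync"
    )

    // UnreservePlugin models the hook the test counts: the framework
    // calls Unreserve whenever a pod that reserved resources fails a
    // later phase (here, the injected prebind failure) and must be
    // rolled back.
    type UnreservePlugin struct {
        mu       sync.Mutex
        numCalls int
    }

    func (u *UnreservePlugin) Unreserve(pod, node string) {
        u.mu.Lock()
        defer u.mu.Unlock()
        u.numCalls++
    }

    // checkCalls is the assertion shape behind the failure messages
    // above. Because failed pods are retried asynchronously, the count
    // is only deterministic if the test synchronizes with the retry
    // loop before asserting.
    func checkCalls(u *UnreservePlugin, want int) error {
        u.mu.Lock()
        defer u.mu.Unlock()
        if u.numCalls != want {
            return fmt.Errorf("expected the unreserve plugin to be called %d times, was called %d times", want, u.numCalls)
        }
        return nil
    }
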
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190611-065459.xml

1692 tests passed; 4 tests were skipped.

Error lines from build-log.txt

... skipping 305 lines ...
W0611 06:49:04.977] I0611 06:49:04.977309   48115 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0611 06:49:04.978] I0611 06:49:04.978034   48115 server.go:560] external host was not specified, using 172.17.0.2
W0611 06:49:04.978] W0611 06:49:04.978073   48115 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0611 06:49:04.979] I0611 06:49:04.979106   48115 server.go:147] Version: v1.16.0-alpha.0.917+8de1569ddae62e
W0611 06:49:05.566] I0611 06:49:05.565595   48115 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0611 06:49:05.566] I0611 06:49:05.565625   48115 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0611 06:49:05.567] E0611 06:49:05.566080   48115 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.567] E0611 06:49:05.566333   48115 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.568] E0611 06:49:05.566374   48115 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.568] E0611 06:49:05.566399   48115 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.568] E0611 06:49:05.566426   48115 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.568] E0611 06:49:05.566475   48115 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.569] E0611 06:49:05.566499   48115 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.569] E0611 06:49:05.566525   48115 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.569] E0611 06:49:05.566587   48115 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.569] E0611 06:49:05.566627   48115 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.570] E0611 06:49:05.566665   48115 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:05.570] E0611 06:49:05.566691   48115 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
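
The "duplicate metrics collector registration attempted" errors occur because a second apiserver instance in the same test process re-registers the admission_quota_controller collectors against the default Prometheus registry. client_golang reports this as a typed error that callers can recover from; a sketch of the usual idiom:

    package metricsutil

    import "github.com/prometheus/client_golang/prometheus"

    // registerOrReuse registers c with the default registry; when an
    // equal collector is already registered (the situation in the errors
    // above), it reuses the existing collector instead of failing.
    func registerOrReuse(c prometheus.Collector) prometheus.Collector {
        if err := prometheus.Register(c); err != nil {
            if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
                return are.ExistingCollector
            }
            panic(err) // incompatible duplicate: a real bug
        }
        return c
    }
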
W0611 06:49:05.570] I0611 06:49:05.566724   48115 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0611 06:49:05.571] I0611 06:49:05.566735   48115 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0611 06:49:05.571] I0611 06:49:05.568445   48115 client.go:354] parsed scheme: ""
W0611 06:49:05.571] I0611 06:49:05.568463   48115 client.go:354] scheme "" not registered, fallback to default scheme
W0611 06:49:05.571] I0611 06:49:05.568521   48115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0611 06:49:05.571] I0611 06:49:05.568610   48115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0611 06:49:06.184] W0611 06:49:06.183687   48115 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0611 06:49:06.564] I0611 06:49:06.563947   48115 client.go:354] parsed scheme: ""
W0611 06:49:06.565] I0611 06:49:06.564352   48115 client.go:354] scheme "" not registered, fallback to default scheme
W0611 06:49:06.565] I0611 06:49:06.564476   48115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0611 06:49:06.565] I0611 06:49:06.564570   48115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0611 06:49:06.566] I0611 06:49:06.566195   48115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0611 06:49:07.333] E0611 06:49:07.332496   48115 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.334] E0611 06:49:07.332584   48115 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.334] E0611 06:49:07.332614   48115 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.334] E0611 06:49:07.332670   48115 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.334] E0611 06:49:07.332730   48115 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.335] E0611 06:49:07.332758   48115 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.335] E0611 06:49:07.332789   48115 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.335] E0611 06:49:07.332814   48115 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.335] E0611 06:49:07.332897   48115 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.335] E0611 06:49:07.332962   48115 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.336] E0611 06:49:07.332993   48115 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.336] E0611 06:49:07.333017   48115 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0611 06:49:07.336] I0611 06:49:07.333047   48115 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0611 06:49:07.336] I0611 06:49:07.333054   48115 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0611 06:49:07.337] I0611 06:49:07.334634   48115 client.go:354] parsed scheme: ""
W0611 06:49:07.337] I0611 06:49:07.334665   48115 client.go:354] scheme "" not registered, fallback to default scheme
W0611 06:49:07.337] I0611 06:49:07.334721   48115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0611 06:49:07.337] I0611 06:49:07.335102   48115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 60 lines ...
W0611 06:49:46.258] I0611 06:49:46.257804   51471 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0611 06:49:46.258] I0611 06:49:46.256606   51471 replica_set.go:182] Starting replicationcontroller controller
W0611 06:49:46.258] I0611 06:49:46.257901   51471 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
W0611 06:49:46.258] I0611 06:49:46.257947   51471 controllermanager.go:532] Started "csrcleaner"
W0611 06:49:46.258] W0611 06:49:46.257969   51471 controllermanager.go:511] "tokencleaner" is disabled
W0611 06:49:46.259] I0611 06:49:46.258161   51471 cleaner.go:81] Starting CSR cleaner controller
W0611 06:49:46.259] E0611 06:49:46.258467   51471 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0611 06:49:46.261] W0611 06:49:46.259201   51471 controllermanager.go:524] Skipping "service"
W0611 06:49:46.262] W0611 06:49:46.259289   51471 controllermanager.go:524] Skipping "root-ca-cert-publisher"
I0611 06:49:46.363] node/127.0.0.1 created
I0611 06:49:46.363] +++ [0611 06:49:46] Checking kubectl version
I0611 06:49:46.364] Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.0.917+8de1569ddae62e", GitCommit:"8de1569ddae62e8fab559fe6bd210a5d6100a277", GitTreeState:"clean", BuildDate:"2019-06-11T06:47:46Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I0611 06:49:46.364] Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.0.917+8de1569ddae62e", GitCommit:"8de1569ddae62e8fab559fe6bd210a5d6100a277", GitTreeState:"clean", BuildDate:"2019-06-11T06:48:07Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 104 lines ...
W0611 06:49:46.972] I0611 06:49:46.905017   51471 node_lifecycle_controller.go:388] Controller will reconcile labels.
W0611 06:49:46.972] I0611 06:49:46.905089   51471 node_lifecycle_controller.go:401] Controller will taint node by condition.
W0611 06:49:46.973] I0611 06:49:46.905142   51471 controllermanager.go:532] Started "nodelifecycle"
W0611 06:49:46.973] I0611 06:49:46.905176   51471 node_lifecycle_controller.go:425] Starting node controller
W0611 06:49:46.973] I0611 06:49:46.905225   51471 controller_utils.go:1029] Waiting for caches to sync for taint controller
W0611 06:49:46.973] I0611 06:49:46.905619   51471 node_lifecycle_controller.go:77] Sending events to api server
W0611 06:49:46.973] E0611 06:49:46.905811   51471 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0611 06:49:46.974] W0611 06:49:46.905849   51471 controllermanager.go:524] Skipping "cloud-node-lifecycle"
W0611 06:49:46.974] I0611 06:49:46.906905   51471 controllermanager.go:532] Started "persistentvolume-binder"
W0611 06:49:46.974] W0611 06:49:46.906973   51471 controllermanager.go:524] Skipping "ttl-after-finished"
W0611 06:49:46.974] I0611 06:49:46.907904   51471 pv_controller_base.go:282] Starting persistent volume controller
W0611 06:49:46.974] I0611 06:49:46.907937   51471 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
W0611 06:49:46.984] I0611 06:49:46.981299   51471 controller_utils.go:1036] Caches are synced for certificate controller
W0611 06:49:46.986] I0611 06:49:46.985183   51471 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0611 06:49:46.986] I0611 06:49:46.985453   51471 controller_utils.go:1036] Caches are synced for expand controller
W0611 06:49:46.998] I0611 06:49:46.998108   51471 controller_utils.go:1036] Caches are synced for namespace controller
W0611 06:49:47.000] I0611 06:49:47.000173   51471 controller_utils.go:1036] Caches are synced for PV protection controller
W0611 06:49:47.003] E0611 06:49:47.002479   51471 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
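
"Operation cannot be fulfilled ... please apply your changes to the latest version and try again" is an optimistic-concurrency conflict: the aggregation controller wrote a clusterrole using a stale resourceVersion. client-go ships a retry helper that re-reads and reapplies the change; a sketch using this era's pre-context client signatures and a caller-supplied mutate function:

    package rbacutil

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // updateClusterRole re-reads the object and reapplies the mutation
    // on every conflict, which is how a controller eventually succeeds
    // after an error like the one in the log.
    func updateClusterRole(cs kubernetes.Interface, name string, mutate func(*rbacv1.ClusterRole)) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            cr, err := cs.RbacV1().ClusterRoles().Get(name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            mutate(cr)
            _, err = cs.RbacV1().ClusterRoles().Update(cr)
            return err
        })
    }
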
I0611 06:49:47.103] {
I0611 06:49:47.104]   "major": "1",
I0611 06:49:47.104]   "minor": "16+",
I0611 06:49:47.104]   "gitVersion": "v1.16.0-alpha.0.917+8de1569ddae62e",
I0611 06:49:47.104]   "gitCommit": "8de1569ddae62e8fab559fe6bd210a5d6100a277",
I0611 06:49:47.104]   "gitTreeState": "clean",
... skipping 4 lines ...
I0611 06:49:47.163] }+++ [0611 06:49:47] Testing kubectl version: check client only output matches expected output
I0611 06:49:47.314] Successful: the flag '--client' shows correct client info
I0611 06:49:47.322] Successful: the flag '--client' correctly has no server version info
I0611 06:49:47.325] +++ [0611 06:49:47] Testing kubectl version: verify json output
W0611 06:49:47.426] I0611 06:49:47.402865   51471 controller_utils.go:1036] Caches are synced for service account controller
W0611 06:49:47.427] I0611 06:49:47.405811   48115 controller.go:606] quota admission added evaluator for: serviceaccounts
W0611 06:49:47.427] W0611 06:49:47.412297   51471 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0611 06:49:47.483] I0611 06:49:47.482733   51471 controller_utils.go:1036] Caches are synced for TTL controller
I0611 06:49:47.583] Successful: --output json has correct client info
I0611 06:49:47.584] Successful: --output json has correct server info
I0611 06:49:47.584] +++ [0611 06:49:47] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0611 06:49:47.635] Successful: --client --output json has correct client info
I0611 06:49:47.640] Successful: --client --output json has no server info
... skipping 75 lines ...
I0611 06:49:50.518] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:49:50.520] +++ command: run_RESTMapper_evaluation_tests
I0611 06:49:50.530] +++ [0611 06:49:50] Creating namespace namespace-1560235790-25932
I0611 06:49:50.597] namespace/namespace-1560235790-25932 created
I0611 06:49:50.667] Context "test" modified.
I0611 06:49:50.673] +++ [0611 06:49:50] Testing RESTMapper
I0611 06:49:50.770] +++ [0611 06:49:50] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0611 06:49:50.782] +++ exit code: 0
I0611 06:49:50.885] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0611 06:49:50.886] bindings                                                                      true         Binding
I0611 06:49:50.886] componentstatuses                 cs                                          false        ComponentStatus
I0611 06:49:50.887] configmaps                        cm                                          true         ConfigMap
I0611 06:49:50.887] endpoints                         ep                                          true         Endpoints
... skipping 646 lines ...
I0611 06:50:08.567] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0611 06:50:08.659] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0611 06:50:08.732] namespace/test-kubectl-describe-pod created
I0611 06:50:08.824] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0611 06:50:08.912] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:08.988] secret/test-secret created
W0611 06:50:09.089] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0611 06:50:09.089] error: setting 'all' parameter but found a non empty selector. 
W0611 06:50:09.089] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0611 06:50:09.190] core.sh:223: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-secret
I0611 06:50:09.190] core.sh:224: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.type}}: test-type
I0611 06:50:09.268] core.sh:229: Successful get configmaps --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq $id_field \"test-configmap\" }}found{{end}}{{end}}:: :
I0611 06:50:09.336] configmap/test-configmap created
I0611 06:50:09.417] core.sh:235: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap
... skipping 5 lines ...
I0611 06:50:09.888] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0611 06:50:09.963] poddisruptionbudget.policy/test-pdb-4 created
I0611 06:50:10.053] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0611 06:50:10.208] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:10.412] pod/env-test-pod created
W0611 06:50:10.513] I0611 06:50:09.481924   48115 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0611 06:50:10.514] error: min-available and max-unavailable cannot be both specified
I0611 06:50:10.614] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0611 06:50:10.615] Name:         env-test-pod
I0611 06:50:10.616] Namespace:    test-kubectl-describe-pod
I0611 06:50:10.616] Priority:     0
I0611 06:50:10.616] Node:         <none>
I0611 06:50:10.616] Labels:       <none>
... skipping 142 lines ...
I0611 06:50:22.697] replicationcontroller "modified" deleted
W0611 06:50:22.798] I0611 06:50:22.343172   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235817-18651", Name:"modified", UID:"1a69a180-54a2-41b8-9c35-2940b19fce57", APIVersion:"v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-cc8hx
I0611 06:50:22.955] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:23.117] pod/valid-pod created
I0611 06:50:23.208] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0611 06:50:23.353] Successful
I0611 06:50:23.353] message:Error from server: cannot restore map from string
I0611 06:50:23.353] has:cannot restore map from string
I0611 06:50:23.434] Successful
I0611 06:50:23.434] message:pod/valid-pod patched (no change)
I0611 06:50:23.434] has:patched (no change)
I0611 06:50:23.513] pod/valid-pod patched
I0611 06:50:23.602] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0611 06:50:24.081] pod/valid-pod patched
I0611 06:50:24.165] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0611 06:50:24.238] pod/valid-pod patched
I0611 06:50:24.325] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0611 06:50:24.492] pod/valid-pod patched
I0611 06:50:24.586] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0611 06:50:24.742] +++ [0611 06:50:24] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0611 06:50:24.843] E0611 06:50:23.346165   48115 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
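The Conflict reported above is the expected optimistic-concurrency failure: the test embeds a stale metadata.resourceVersion in the patch body, so the API server refuses to apply it. A minimal sketch of the pattern, assuming the valid-pod fixture with its kubernetes-serve-hostname container:

  # A patch carrying an outdated resourceVersion is rejected with 409 Conflict:
  kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"497"},"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"nginx"}]}}'
  # Dropping resourceVersion lets the server patch the latest revision:
  kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"nginx"}]}}'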
I0611 06:50:24.994] pod "valid-pod" deleted
I0611 06:50:25.010] pod/valid-pod replaced
I0611 06:50:25.101] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0611 06:50:25.265] Successful
I0611 06:50:25.265] message:error: --grace-period must have --force specified
I0611 06:50:25.265] has:\-\-grace-period must have \-\-force specified
I0611 06:50:25.417] Successful
I0611 06:50:25.418] message:error: --timeout must have --force specified
I0611 06:50:25.418] has:\-\-timeout must have \-\-force specified
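Both assertions above cover kubectl's rule that a zero grace period (and likewise --timeout) is only honored together with --force. A sketch, pod name illustrative:

  # Rejected: --grace-period without --force
  kubectl delete pod valid-pod --grace-period=0
  # Accepted: immediate deletion without waiting for graceful termination
  kubectl delete pod valid-pod --grace-period=0 --force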
I0611 06:50:25.576] node/node-v1-test created
W0611 06:50:25.677] W0611 06:50:25.576468   51471 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0611 06:50:25.778] node/node-v1-test replaced
I0611 06:50:25.857] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0611 06:50:25.949] node "node-v1-test" deleted
I0611 06:50:26.051] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0611 06:50:26.339] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0611 06:50:27.380] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 57 lines ...
I0611 06:50:31.453] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:31.611] pod/test-pod created
W0611 06:50:31.711] Edit cancelled, no changes made.
W0611 06:50:31.712] Edit cancelled, no changes made.
W0611 06:50:31.712] Edit cancelled, no changes made.
W0611 06:50:31.712] Edit cancelled, no changes made.
W0611 06:50:31.712] error: 'name' already has a value (valid-pod), and --overwrite is false
W0611 06:50:31.712] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0611 06:50:31.713] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0611 06:50:31.813] pod "test-pod" deleted
I0611 06:50:31.813] +++ [0611 06:50:31] Creating namespace namespace-1560235831-301
I0611 06:50:31.857] namespace/namespace-1560235831-301 created
I0611 06:50:31.937] Context "test" modified.
... skipping 41 lines ...
I0611 06:50:34.912] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0611 06:50:34.915] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:50:34.916] +++ command: run_kubectl_create_error_tests
I0611 06:50:34.927] +++ [0611 06:50:34] Creating namespace namespace-1560235834-17306
I0611 06:50:34.995] namespace/namespace-1560235834-17306 created
I0611 06:50:35.061] Context "test" modified.
I0611 06:50:35.067] +++ [0611 06:50:35] Testing kubectl create with error
W0611 06:50:35.168] Error: must specify one of -f and -k
W0611 06:50:35.168] 
W0611 06:50:35.168] Create a resource from a file or from stdin.
W0611 06:50:35.168] 
W0611 06:50:35.169]  JSON and YAML formats are accepted.
W0611 06:50:35.169] 
W0611 06:50:35.169] Examples:
... skipping 41 lines ...
W0611 06:50:35.176] 
W0611 06:50:35.176] Usage:
W0611 06:50:35.176]   kubectl create -f FILENAME [options]
W0611 06:50:35.176] 
W0611 06:50:35.176] Use "kubectl <command> --help" for more information about a given command.
W0611 06:50:35.176] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0611 06:50:35.305] +++ [0611 06:50:35] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
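As the message itself suggests, client-side schema validation can be switched off when a manifest is known to fail it, leaving the final decision to the server. Sketch using the same fixture path:

  # Fails client-side validation (nil entry in containers[0].args):
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml
  # Skips client-side validation:
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false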
W0611 06:50:35.406] kubectl convert is DEPRECATED and will be removed in a future version.
W0611 06:50:35.406] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0611 06:50:35.507] +++ exit code: 0
I0611 06:50:35.507] Recording: run_kubectl_apply_tests
I0611 06:50:35.507] Running command: run_kubectl_apply_tests
I0611 06:50:35.512] 
... skipping 19 lines ...
W0611 06:50:37.680] I0611 06:50:37.679861   48115 client.go:354] parsed scheme: ""
W0611 06:50:37.680] I0611 06:50:37.679896   48115 client.go:354] scheme "" not registered, fallback to default scheme
W0611 06:50:37.680] I0611 06:50:37.679939   48115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0611 06:50:37.681] I0611 06:50:37.680006   48115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0611 06:50:37.681] I0611 06:50:37.680455   48115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0611 06:50:37.683] I0611 06:50:37.683123   48115 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0611 06:50:37.769] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0611 06:50:37.869] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0611 06:50:37.870] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
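The "serverside-applied (server dry run)" line above comes from a server-side dry-run apply: admission and validation run on the API server, but nothing is persisted. At the time of this log the flag was spelled --server-dry-run (later superseded by --dry-run=server); the file name below is illustrative:

  # Object is fully processed server-side but not stored:
  kubectl apply --server-dry-run -f myobj.yaml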
I0611 06:50:37.888] +++ exit code: 0
I0611 06:50:37.916] Recording: run_kubectl_run_tests
I0611 06:50:37.916] Running command: run_kubectl_run_tests
I0611 06:50:37.933] 
... skipping 91 lines ...
I0611 06:50:40.269] Context "test" modified.
I0611 06:50:40.275] +++ [0611 06:50:40] Testing kubectl create filter
I0611 06:50:40.355] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:40.508] pod/selector-test-pod created
I0611 06:50:40.595] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0611 06:50:40.671] Successful
I0611 06:50:40.671] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0611 06:50:40.672] has:pods "selector-test-pod-dont-apply" not found
I0611 06:50:40.743] pod "selector-test-pod" deleted
I0611 06:50:40.759] +++ exit code: 0
I0611 06:50:40.788] Recording: run_kubectl_apply_deployments_tests
I0611 06:50:40.789] Running command: run_kubectl_apply_deployments_tests
I0611 06:50:40.806] 
... skipping 32 lines ...
W0611 06:50:42.680] I0611 06:50:41.417416   48115 controller.go:606] quota admission added evaluator for: deployments.extensions
W0611 06:50:42.680] I0611 06:50:41.430385   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235840-30415", Name:"my-depl", UID:"55c3a91f-d31d-4704-9b88-7da3ba270c3e", APIVersion:"apps/v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6c96fb7d98 to 1
W0611 06:50:42.681] I0611 06:50:41.438039   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"my-depl-6c96fb7d98", UID:"3e1832be-242f-44e9-ae9e-1eae02576c1a", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6c96fb7d98-kcr7m
W0611 06:50:42.681] I0611 06:50:41.942707   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235840-30415", Name:"my-depl", UID:"55c3a91f-d31d-4704-9b88-7da3ba270c3e", APIVersion:"apps/v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-58dfc757b8 to 1
W0611 06:50:42.681] I0611 06:50:41.947517   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"my-depl-58dfc757b8", UID:"3bf5a748-a9b9-4b59-b447-326bfa1268e9", APIVersion:"apps/v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-58dfc757b8-5d4d8
W0611 06:50:42.681] I0611 06:50:42.538298   48115 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0611 06:50:42.682] E0611 06:50:42.578945   51471 replica_set.go:450] Sync "namespace-1560235840-30415/my-depl-58dfc757b8" failed with replicasets.apps "my-depl-58dfc757b8" not found
W0611 06:50:42.682] E0611 06:50:42.584018   51471 replica_set.go:450] Sync "namespace-1560235840-30415/my-depl-6c96fb7d98" failed with replicasets.apps "my-depl-6c96fb7d98" not found
I0611 06:50:42.782] apps.sh:141: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:42.783] apps.sh:142: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:42.855] apps.sh:143: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:42.938] apps.sh:147: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:50:43.098] deployment.extensions/nginx created
W0611 06:50:43.199] I0611 06:50:43.104983   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235840-30415", Name:"nginx", UID:"c2b21f64-fe3e-47be-924c-79d17b837316", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-54d5cbd75f to 3
W0611 06:50:43.200] I0611 06:50:43.109466   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"nginx-54d5cbd75f", UID:"f6ac0f1f-6e0b-4d6e-9bfe-6bb60a287056", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-54d5cbd75f-5lv7d
W0611 06:50:43.200] I0611 06:50:43.114660   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"nginx-54d5cbd75f", UID:"f6ac0f1f-6e0b-4d6e-9bfe-6bb60a287056", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-54d5cbd75f-p2b8b
W0611 06:50:43.201] I0611 06:50:43.115826   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"nginx-54d5cbd75f", UID:"f6ac0f1f-6e0b-4d6e-9bfe-6bb60a287056", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-54d5cbd75f-6vmb7
I0611 06:50:43.301] apps.sh:151: Successful get deployment nginx {{.metadata.name}}: nginx
I0611 06:50:47.474] Successful
I0611 06:50:47.474] message:Error from server (Conflict): error when applying patch:
I0611 06:50:47.475] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1560235840-30415\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0611 06:50:47.475] to:
I0611 06:50:47.475] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0611 06:50:47.475] Name: "nginx", Namespace: "namespace-1560235840-30415"
I0611 06:50:47.478] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1560235840-30415\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-06-11T06:50:43Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-06-11T06:50:43Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-06-11T06:50:43Z"]] "name":"nginx" "namespace":"namespace-1560235840-30415" "resourceVersion":"603" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1560235840-30415/deployments/nginx" "uid":"c2b21f64-fe3e-47be-924c-79d17b837316"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-06-11T06:50:43Z" "lastUpdateTime":"2019-06-11T06:50:43Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0611 06:50:47.479] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0611 06:50:47.479] has:Error from server (Conflict)
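This Conflict is likewise expected: the applied manifest changes the deployment's selector, and the three-way merge is computed against an object that has since moved on. Two common ways out, sketched under the assumption that recreating the object is acceptable:

  # Re-apply once the live object has been re-read:
  kubectl get deployment nginx -o yaml
  kubectl apply -f hack/testdata/deployment-label-change2.yaml
  # Or force a delete-and-recreate (disruptive):
  kubectl apply -f hack/testdata/deployment-label-change2.yaml --force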
W0611 06:50:49.378] I0611 06:50:49.378389   51471 horizontal.go:340] Horizontal Pod Autoscaler frontend has been deleted in namespace-1560235832-15785
W0611 06:50:52.015] E0611 06:50:52.014760   51471 replica_set.go:450] Sync "namespace-1560235840-30415/nginx-54d5cbd75f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-54d5cbd75f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1560235840-30415/nginx-54d5cbd75f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f6ac0f1f-6e0b-4d6e-9bfe-6bb60a287056, UID in object meta: 
I0611 06:50:52.756] deployment.extensions/nginx configured
W0611 06:50:52.857] I0611 06:50:52.762438   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235840-30415", Name:"nginx", UID:"7c483de7-2392-4992-936d-684c822a5612", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5b7c5d5d to 3
W0611 06:50:52.857] I0611 06:50:52.768386   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"nginx-5b7c5d5d", UID:"e20fe492-2af7-4ccc-a7e0-4cc67d33cf12", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5b7c5d5d-s6ljn
W0611 06:50:52.857] I0611 06:50:52.773095   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"nginx-5b7c5d5d", UID:"e20fe492-2af7-4ccc-a7e0-4cc67d33cf12", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5b7c5d5d-zzkt6
W0611 06:50:52.858] I0611 06:50:52.774089   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235840-30415", Name:"nginx-5b7c5d5d", UID:"e20fe492-2af7-4ccc-a7e0-4cc67d33cf12", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5b7c5d5d-frttg
I0611 06:50:52.958] Successful
... skipping 168 lines ...
I0611 06:51:00.382] +++ [0611 06:51:00] Creating namespace namespace-1560235860-23927
I0611 06:51:00.454] namespace/namespace-1560235860-23927 created
I0611 06:51:00.533] Context "test" modified.
I0611 06:51:00.539] +++ [0611 06:51:00] Testing kubectl get
I0611 06:51:00.627] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:00.723] Successful
I0611 06:51:00.723] message:Error from server (NotFound): pods "abc" not found
I0611 06:51:00.724] has:pods "abc" not found
I0611 06:51:00.830] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:00.922] Successful
I0611 06:51:00.923] message:Error from server (NotFound): pods "abc" not found
I0611 06:51:00.923] has:pods "abc" not found
I0611 06:51:01.022] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:01.121] Successful
I0611 06:51:01.121] message:{
I0611 06:51:01.122]     "apiVersion": "v1",
I0611 06:51:01.122]     "items": [],
... skipping 23 lines ...
I0611 06:51:01.478] has not:No resources found
I0611 06:51:01.574] Successful
I0611 06:51:01.574] message:NAME
I0611 06:51:01.575] has not:No resources found
I0611 06:51:01.664] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:01.761] Successful
I0611 06:51:01.761] message:error: the server doesn't have a resource type "foobar"
I0611 06:51:01.761] has not:No resources found
I0611 06:51:01.841] Successful
I0611 06:51:01.841] message:No resources found.
I0611 06:51:01.841] has:No resources found
I0611 06:51:01.921] Successful
I0611 06:51:01.922] message:
I0611 06:51:01.922] has not:No resources found
I0611 06:51:02.002] Successful
I0611 06:51:02.002] message:No resources found.
I0611 06:51:02.003] has:No resources found
I0611 06:51:02.085] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:02.162] Successful
I0611 06:51:02.163] message:Error from server (NotFound): pods "abc" not found
I0611 06:51:02.163] has:pods "abc" not found
I0611 06:51:02.165] FAIL!
I0611 06:51:02.165] message:Error from server (NotFound): pods "abc" not found
I0611 06:51:02.165] has not:List
I0611 06:51:02.166] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0611 06:51:02.272] Successful
I0611 06:51:02.272] message:I0611 06:51:02.225864   62109 loader.go:359] Config loaded from file:  /tmp/tmp.32XK0AHAAO/.kube/config
I0611 06:51:02.273] I0611 06:51:02.227601   62109 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0611 06:51:02.273] I0611 06:51:02.247714   62109 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 888 lines ...
I0611 06:51:07.910] Successful
I0611 06:51:07.911] message:NAME    DATA   AGE
I0611 06:51:07.911] one     0      0s
I0611 06:51:07.911] three   0      0s
I0611 06:51:07.911] two     0      0s
I0611 06:51:07.911] STATUS    REASON          MESSAGE
I0611 06:51:07.911] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0611 06:51:07.911] has not:watch is only supported on individual resources
I0611 06:51:09.006] Successful
I0611 06:51:09.007] message:STATUS    REASON          MESSAGE
I0611 06:51:09.007] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0611 06:51:09.007] has not:watch is only supported on individual resources
I0611 06:51:09.011] +++ [0611 06:51:09] Creating namespace namespace-1560235869-20540
I0611 06:51:09.085] namespace/namespace-1560235869-20540 created
I0611 06:51:09.155] Context "test" modified.
I0611 06:51:09.239] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:09.389] pod/valid-pod created
... skipping 104 lines ...
I0611 06:51:09.485] }
I0611 06:51:09.575] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0611 06:51:09.816] <no value>Successful
I0611 06:51:09.816] message:valid-pod:
I0611 06:51:09.816] has:valid-pod:
I0611 06:51:09.892] Successful
I0611 06:51:09.892] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0611 06:51:09.893] 	template was:
I0611 06:51:09.893] 		{.missing}
I0611 06:51:09.893] 	object given to jsonpath engine was:
I0611 06:51:09.894] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-06-11T06:51:09Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-06-11T06:51:09Z"}}, "name":"valid-pod", "namespace":"namespace-1560235869-20540", "resourceVersion":"702", "selfLink":"/api/v1/namespaces/namespace-1560235869-20540/pods/valid-pod", "uid":"1a7c77b7-b635-471c-bf4e-e3d6064bc22e"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0611 06:51:09.894] has:missing is not found
I0611 06:51:09.968] Successful
I0611 06:51:09.969] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0611 06:51:09.969] 	template was:
I0611 06:51:09.969] 		{{.missing}}
I0611 06:51:09.969] 	raw data was:
I0611 06:51:09.970] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-06-11T06:51:09Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-06-11T06:51:09Z"}],"name":"valid-pod","namespace":"namespace-1560235869-20540","resourceVersion":"702","selfLink":"/api/v1/namespaces/namespace-1560235869-20540/pods/valid-pod","uid":"1a7c77b7-b635-471c-bf4e-e3d6064bc22e"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0611 06:51:09.970] 	object given to template engine was:
I0611 06:51:09.971] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-06-11T06:51:09Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-06-11T06:51:09Z]] name:valid-pod namespace:namespace-1560235869-20540 resourceVersion:702 selfLink:/api/v1/namespaces/namespace-1560235869-20540/pods/valid-pod uid:1a7c77b7-b635-471c-bf4e-e3d6064bc22e] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0611 06:51:09.971] has:map has no entry for key "missing"
W0611 06:51:10.072] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
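Both failures above are lookups of a nonexistent key, one through jsonpath and one through a go-template; the same commands succeed when the path exists. A minimal sketch against the valid-pod fixture:

  # jsonpath: a valid path prints the value, '{.missing}' errors as above
  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'
  # go-template equivalent of the failing '{{.missing}}' case:
  kubectl get pod valid-pod -o go-template='{{.metadata.name}}'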
I0611 06:51:11.047] Successful
I0611 06:51:11.047] message:NAME        READY   STATUS    RESTARTS   AGE
I0611 06:51:11.047] valid-pod   0/1     Pending   0          1s
I0611 06:51:11.047] STATUS      REASON          MESSAGE
I0611 06:51:11.048] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0611 06:51:11.048] has:STATUS
I0611 06:51:11.048] Successful
I0611 06:51:11.049] message:NAME        READY   STATUS    RESTARTS   AGE
I0611 06:51:11.049] valid-pod   0/1     Pending   0          1s
I0611 06:51:11.049] STATUS      REASON          MESSAGE
I0611 06:51:11.049] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0611 06:51:11.049] has:valid-pod
I0611 06:51:12.127] Successful
I0611 06:51:12.128] message:pod/valid-pod
I0611 06:51:12.128] has not:STATUS
I0611 06:51:12.130] Successful
I0611 06:51:12.130] message:pod/valid-pod
... skipping 142 lines ...
I0611 06:51:13.228]   terminationGracePeriodSeconds: 30
I0611 06:51:13.228] status:
I0611 06:51:13.228]   phase: Pending
I0611 06:51:13.228]   qosClass: Guaranteed
I0611 06:51:13.228] has:name: valid-pod
I0611 06:51:13.300] Successful
I0611 06:51:13.301] message:Error from server (NotFound): pods "invalid-pod" not found
I0611 06:51:13.301] has:"invalid-pod" not found
I0611 06:51:13.377] pod "valid-pod" deleted
I0611 06:51:13.471] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:13.657] pod/redis-master created
I0611 06:51:13.662] pod/valid-pod created
I0611 06:51:13.756] Successful
... skipping 283 lines ...
I0611 06:51:19.354] +++ command: run_kubectl_exec_pod_tests
I0611 06:51:19.365] +++ [0611 06:51:19] Creating namespace namespace-1560235879-17970
I0611 06:51:19.436] namespace/namespace-1560235879-17970 created
I0611 06:51:19.502] Context "test" modified.
I0611 06:51:19.507] +++ [0611 06:51:19] Testing kubectl exec POD COMMAND
I0611 06:51:19.581] Successful
I0611 06:51:19.582] message:Error from server (NotFound): pods "abc" not found
I0611 06:51:19.582] has:pods "abc" not found
W0611 06:51:19.682] I0611 06:51:18.918496   51471 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for foos.company.com
W0611 06:51:19.683] I0611 06:51:18.918609   51471 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0611 06:51:19.683] E0611 06:51:18.919723   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:51:19.789] pod/test-pod created
I0611 06:51:19.885] Successful
I0611 06:51:19.886] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0611 06:51:19.886] has not:pods "test-pod" not found
I0611 06:51:19.886] Successful
I0611 06:51:19.887] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0611 06:51:19.887] has not:pod or type/name must be specified
I0611 06:51:19.960] pod "test-pod" deleted
I0611 06:51:19.979] +++ exit code: 0
I0611 06:51:20.011] Recording: run_kubectl_exec_resource_name_tests
I0611 06:51:20.012] Running command: run_kubectl_exec_resource_name_tests
I0611 06:51:20.031] 
I0611 06:51:20.033] +++ Running case: test-cmd.run_kubectl_exec_resource_name_tests 
I0611 06:51:20.035] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:51:20.037] +++ command: run_kubectl_exec_resource_name_tests
I0611 06:51:20.048] +++ [0611 06:51:20] Creating namespace namespace-1560235880-4437
I0611 06:51:20.116] namespace/namespace-1560235880-4437 created
I0611 06:51:20.182] Context "test" modified.
I0611 06:51:20.188] +++ [0611 06:51:20] Testing kubectl exec TYPE/NAME COMMAND
W0611 06:51:20.289] E0611 06:51:19.921103   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:51:20.389] Successful
I0611 06:51:20.390] message:error: the server doesn't have a resource type "foo"
I0611 06:51:20.390] has:error:
I0611 06:51:20.396] Successful
I0611 06:51:20.396] message:Error from server (NotFound): deployments.extensions "bar" not found
I0611 06:51:20.396] has:"bar" not found
I0611 06:51:20.563] pod/test-pod created
I0611 06:51:20.740] replicaset.apps/frontend created
W0611 06:51:20.841] I0611 06:51:20.746105   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235880-4437", Name:"frontend", UID:"c68e6680-2454-460c-b052-941689be30e2", APIVersion:"apps/v1", ResourceVersion:"817", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mp78j
W0611 06:51:20.841] I0611 06:51:20.750772   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235880-4437", Name:"frontend", UID:"c68e6680-2454-460c-b052-941689be30e2", APIVersion:"apps/v1", ResourceVersion:"817", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rmhn2
W0611 06:51:20.841] I0611 06:51:20.751721   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235880-4437", Name:"frontend", UID:"c68e6680-2454-460c-b052-941689be30e2", APIVersion:"apps/v1", ResourceVersion:"817", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6vmkw
W0611 06:51:20.923] E0611 06:51:20.922756   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:51:21.023] configmap/test-set-env-config created
I0611 06:51:21.024] Successful
I0611 06:51:21.024] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0611 06:51:21.024] has:not implemented
I0611 06:51:21.108] Successful
I0611 06:51:21.108] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0611 06:51:21.108] has not:not found
I0611 06:51:21.109] Successful
I0611 06:51:21.109] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0611 06:51:21.109] has not:pod or type/name must be specified
I0611 06:51:21.199] Successful
I0611 06:51:21.199] message:Error from server (BadRequest): pod frontend-6vmkw does not have a host assigned
I0611 06:51:21.200] has not:not found
I0611 06:51:21.201] Successful
I0611 06:51:21.201] message:Error from server (BadRequest): pod frontend-6vmkw does not have a host assigned
I0611 06:51:21.201] has not:pod or type/name must be specified
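kubectl exec with TYPE/NAME first resolves the resource to one of its pods, which is why the replicaset variants above fail with the same "no host assigned" message as the plain pod: resolution succeeded, but the chosen pod is unscheduled. Sketch using the frontend replicaset from this test:

  # Resolves rs/frontend to one of its pods, then execs into it:
  kubectl exec rs/frontend -- hostname
  # Direct form against a single pod:
  kubectl exec frontend-6vmkw -- hostname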
I0611 06:51:21.272] pod "test-pod" deleted
I0611 06:51:21.355] replicaset.extensions "frontend" deleted
I0611 06:51:21.429] configmap "test-set-env-config" deleted
I0611 06:51:21.446] +++ exit code: 0
I0611 06:51:21.474] Recording: run_create_secret_tests
I0611 06:51:21.474] Running command: run_create_secret_tests
I0611 06:51:21.492] 
I0611 06:51:21.494] +++ Running case: test-cmd.run_create_secret_tests 
I0611 06:51:21.496] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:51:21.498] +++ command: run_create_secret_tests
I0611 06:51:21.585] Successful
I0611 06:51:21.585] message:Error from server (NotFound): secrets "mysecret" not found
I0611 06:51:21.585] has:secrets "mysecret" not found
I0611 06:51:21.739] Successful
I0611 06:51:21.739] message:Error from server (NotFound): secrets "mysecret" not found
I0611 06:51:21.739] has:secrets "mysecret" not found
I0611 06:51:21.741] Successful
I0611 06:51:21.741] message:user-specified
I0611 06:51:21.741] has:user-specified
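The "user-specified" value above is read back from a generic secret created from a literal. Sketch with illustrative key names; note that secret data comes back base64-encoded:

  kubectl create secret generic mysecret --from-literal=username=user-specified
  # Prints the base64-encoded value of the "username" key:
  kubectl get secret mysecret -o go-template='{{index .data "username"}}'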
I0611 06:51:21.816] Successful
I0611 06:51:21.890] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"c762a473-e6a3-4dc2-aad5-ca1ff16371ae","resourceVersion":"837","creationTimestamp":"2019-06-11T06:51:21Z"}}
... skipping 8 lines ...
I0611 06:51:22.133] create.sh:118: Successful get configmaps {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:22.217] create.sh:119: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:22.312] create.sh:120: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:51:22.533] configmap/test-the-map created
I0611 06:51:22.540] service/test-the-service created
I0611 06:51:22.547] deployment.apps/test-the-deployment created
W0611 06:51:22.647] E0611 06:51:21.924160   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:51:22.648] I0611 06:51:22.559503   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235880-4437", Name:"test-the-deployment", UID:"af5d2f80-5636-4bdd-9519-81f23f78cbc0", APIVersion:"apps/v1", ResourceVersion:"845", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-the-deployment-5f6d7c99fd to 3
W0611 06:51:22.648] I0611 06:51:22.565732   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235880-4437", Name:"test-the-deployment-5f6d7c99fd", UID:"827e1193-c397-4c39-b9c2-74cdff29f7c7", APIVersion:"apps/v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-5f6d7c99fd-qtnqr
W0611 06:51:22.649] I0611 06:51:22.572319   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235880-4437", Name:"test-the-deployment-5f6d7c99fd", UID:"827e1193-c397-4c39-b9c2-74cdff29f7c7", APIVersion:"apps/v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-5f6d7c99fd-ph2v7
W0611 06:51:22.649] I0611 06:51:22.577611   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235880-4437", Name:"test-the-deployment-5f6d7c99fd", UID:"827e1193-c397-4c39-b9c2-74cdff29f7c7", APIVersion:"apps/v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-5f6d7c99fd-wrp9x
I0611 06:51:22.749] create.sh:126: Successful get configmap test-the-map {{.metadata.name}}: test-the-map
I0611 06:51:22.750] (Bcreate.sh:127: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
... skipping 140 lines ...
I0611 06:51:24.446] }
I0611 06:51:24.519] request-timeout.sh:34: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0611 06:51:24.597] Successful
I0611 06:51:24.597] message:NAME        READY   STATUS    RESTARTS   AGE
I0611 06:51:24.597] valid-pod   0/1     Pending   0          0s
I0611 06:51:24.597] has:valid-pod
W0611 06:51:24.698] E0611 06:51:22.925203   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:51:24.698] E0611 06:51:23.926497   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:51:24.928] E0611 06:51:24.927836   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:51:25.688] Successful
I0611 06:51:25.689] message:NAME        READY   STATUS    RESTARTS   AGE
I0611 06:51:25.689] valid-pod   0/1     Pending   0          0s
I0611 06:51:25.689] STATUS      REASON          MESSAGE
I0611 06:51:25.689] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0611 06:51:25.689] has:Timeout exceeded while reading body
I0611 06:51:25.767] Successful
I0611 06:51:25.768] message:NAME        READY   STATUS    RESTARTS   AGE
I0611 06:51:25.768] valid-pod   0/1     Pending   0          1s
I0611 06:51:25.768] has:valid-pod
I0611 06:51:25.847] Successful
I0611 06:51:25.848] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0611 06:51:25.848] has:Invalid timeout value
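--request-timeout accepts a bare integer (interpreted as seconds) or an integer with a time unit; anything else triggers the error above. Sketch:

  kubectl get pods --request-timeout=5    # five seconds
  kubectl get pods --request-timeout=2m   # unit suffix also accepted
  kubectl get pods --request-timeout=abc  # rejected: Invalid timeout value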
I0611 06:51:25.932] pod "valid-pod" deleted
I0611 06:51:25.949] +++ exit code: 0
I0611 06:51:25.981] Recording: run_crd_tests
I0611 06:51:25.982] Running command: run_crd_tests
I0611 06:51:26.002] 
I0611 06:51:26.004] +++ Running case: test-cmd.run_crd_tests 
I0611 06:51:26.007] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:51:26.009] +++ command: run_crd_tests
I0611 06:51:26.019] +++ [0611 06:51:26] Creating namespace namespace-1560235886-29339
I0611 06:51:26.097] namespace/namespace-1560235886-29339 created
I0611 06:51:26.173] Context "test" modified.
I0611 06:51:26.180] +++ [0611 06:51:26] Testing kubectl crd
W0611 06:51:26.281] E0611 06:51:25.928947   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:51:26.404] customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I0611 06:51:26.500] crd.sh:47: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0611 06:51:26.684] customresourcedefinition.apiextensions.k8s.io/bars.company.com created
I0611 06:51:26.800] crd.sh:69: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:
I0611 06:51:26.972] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0611 06:51:27.073] I0611 06:51:26.930013   48115 client.go:354] parsed scheme: ""
... skipping 225 lines ...
I0611 06:51:30.640] foo.company.com/test patched
I0611 06:51:30.722] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0611 06:51:30.796] foo.company.com/test patched
I0611 06:51:30.882] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0611 06:51:30.960] foo.company.com/test patched
I0611 06:51:31.045] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0611 06:51:31.185] +++ [0611 06:51:31] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0611 06:51:31.243] {
I0611 06:51:31.243]     "apiVersion": "company.com/v1",
I0611 06:51:31.243]     "kind": "Foo",
I0611 06:51:31.243]     "metadata": {
I0611 06:51:31.243]         "annotations": {
I0611 06:51:31.243]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 319 lines ...
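The "kubectl patch --local" failure above is specific to custom resources, which carry no strategic-merge schema; as the error message suggests, a JSON merge patch works locally. Sketch, with the file name illustrative:

  # Default strategic merge fails locally for a CR; --type merge succeeds:
  kubectl patch -f foo-cr.yaml --local --type merge -p '{"patched":null}' -o json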
W0611 06:51:57.323] I0611 06:51:57.322339   51471 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for resources.mygroup.example.com
W0611 06:51:57.323] I0611 06:51:57.322406   51471 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for bars.company.com
W0611 06:51:57.323] I0611 06:51:57.322454   51471 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0611 06:51:57.423] I0611 06:51:57.422852   51471 controller_utils.go:1036] Caches are synced for resource quota controller
I0611 06:52:01.490] crd.sh:459: Successful get bars {{len .items}}: 0
I0611 06:52:01.662] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0611 06:52:01.762] Error from server (NotFound): namespaces "non-native-resources" not found
I0611 06:52:01.863] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0611 06:52:01.895] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0611 06:52:02.002] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0611 06:52:02.042] +++ exit code: 0
I0611 06:52:02.082] Recording: run_cmd_with_img_tests
I0611 06:52:02.082] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0611 06:52:02.389] I0611 06:52:02.385098   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-3118", Name:"test1-67487cf556", UID:"3c86c310-f5de-4055-b8e1-a3acec4741f5", APIVersion:"apps/v1", ResourceVersion:"993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-67487cf556-x2l6f
I0611 06:52:02.489] Successful
I0611 06:52:02.490] message:deployment.apps/test1 created
I0611 06:52:02.490] has:deployment.apps/test1 created
I0611 06:52:02.490] deployment.extensions "test1" deleted
I0611 06:52:02.550] Successful
I0611 06:52:02.550] message:error: Invalid image name "InvalidImageName": invalid reference format
I0611 06:52:02.550] has:error: Invalid image name "InvalidImageName": invalid reference format
I0611 06:52:02.563] +++ exit code: 0
I0611 06:52:02.605] +++ [0611 06:52:02] Testing recursive resources
I0611 06:52:02.611] +++ [0611 06:52:02] Creating namespace namespace-1560235922-7347
I0611 06:52:02.688] namespace/namespace-1560235922-7347 created
I0611 06:52:02.761] Context "test" modified.
I0611 06:52:02.856] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:03.153] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:03.155] Successful
I0611 06:52:03.155] message:pod/busybox0 created
I0611 06:52:03.155] pod/busybox1 created
I0611 06:52:03.156] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0611 06:52:03.156] has:error validating data: kind not set
I0611 06:52:03.249] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:03.423] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0611 06:52:03.424] Successful
I0611 06:52:03.425] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:03.425] has:Object 'Kind' is missing
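The recursive tests above walk a fixture tree that deliberately contains one broken manifest; kubectl keeps processing the valid files and only reports the decode error for the bad one. Sketch using the fixture directory from this log:

  # Creates busybox0 and busybox1, reports "Object 'Kind' is missing"
  # for the intentionally broken busybox-broken.yaml, and exits non-zero:
  kubectl create -f hack/testdata/recursive/pod --recursive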
I0611 06:52:03.516] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:03.837] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0611 06:52:03.839] Successful
I0611 06:52:03.840] message:pod/busybox0 replaced
I0611 06:52:03.840] pod/busybox1 replaced
I0611 06:52:03.841] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0611 06:52:03.841] has:error validating data: kind not set
I0611 06:52:03.934] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:04.038] Successful
I0611 06:52:04.039] message:Name:         busybox0
I0611 06:52:04.039] Namespace:    namespace-1560235922-7347
I0611 06:52:04.039] Priority:     0
I0611 06:52:04.039] Node:         <none>
... skipping 153 lines ...
I0611 06:52:04.052] has:Object 'Kind' is missing
I0611 06:52:04.140] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:04.329] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0611 06:52:04.331] Successful
I0611 06:52:04.331] message:pod/busybox0 annotated
I0611 06:52:04.331] pod/busybox1 annotated
I0611 06:52:04.332] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:04.332] has:Object 'Kind' is missing
I0611 06:52:04.426] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:04.739] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0611 06:52:04.741] Successful
I0611 06:52:04.742] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0611 06:52:04.742] pod/busybox0 configured
I0611 06:52:04.742] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0611 06:52:04.742] pod/busybox1 configured
I0611 06:52:04.742] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0611 06:52:04.742] has:error validating data: kind not set
I0611 06:52:04.831] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:05.007] deployment.apps/nginx created
I0611 06:52:05.109] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0611 06:52:05.205] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0611 06:52:05.380] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0611 06:52:05.382] Successful
... skipping 38 lines ...
I0611 06:52:05.387]       securityContext: {}
I0611 06:52:05.387]       terminationGracePeriodSeconds: 30
I0611 06:52:05.387] status: {}
I0611 06:52:05.387] has:apps/v1
I0611 06:52:05.468] deployment.extensions "nginx" deleted
W0611 06:52:05.568] W0611 06:52:02.674386   48115 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0611 06:52:05.569] E0611 06:52:02.675910   51471 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.569] W0611 06:52:02.800094   48115 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0611 06:52:05.569] E0611 06:52:02.801638   51471 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.570] W0611 06:52:02.910285   48115 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0611 06:52:05.570] E0611 06:52:02.911749   51471 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.570] W0611 06:52:03.032139   48115 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0611 06:52:05.570] E0611 06:52:03.034085   51471 reflector.go:283] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to watch *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.571] E0611 06:52:03.677334   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.571] E0611 06:52:03.803361   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.571] E0611 06:52:03.913253   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.571] E0611 06:52:04.035915   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.572] E0611 06:52:04.679131   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.572] E0611 06:52:04.804945   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.573] E0611 06:52:04.914806   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.574] I0611 06:52:05.013960   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235922-7347", Name:"nginx", UID:"e7e725b4-e64e-4097-b083-a06f60e1c79c", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-74bc9cdd45 to 3
W0611 06:52:05.574] I0611 06:52:05.019343   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx-74bc9cdd45", UID:"7db198d7-29f5-4ceb-b598-772266eb730d", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-8wrlg
W0611 06:52:05.574] I0611 06:52:05.024992   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx-74bc9cdd45", UID:"7db198d7-29f5-4ceb-b598-772266eb730d", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-85xvc
W0611 06:52:05.575] I0611 06:52:05.025085   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx-74bc9cdd45", UID:"7db198d7-29f5-4ceb-b598-772266eb730d", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74bc9cdd45-k49q6
W0611 06:52:05.575] E0611 06:52:05.037196   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:05.575] kubectl convert is DEPRECATED and will be removed in a future version.
W0611 06:52:05.575] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
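The two W-lines above are the kubectl convert deprecation notice being exercised by the test. The replacement workflow the notice describes is, in sketch (object.yaml hypothetical):

    kubectl apply -f object.yaml          # persist the object server-side
    kubectl get -f object.yaml -o yaml    # read it back at the server's version
    # a specific API version can be requested with a fully-qualified resource name:
    kubectl get deployments.v1.apps nginx -o yaml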
I0611 06:52:05.676] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:05.758] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:05.762] Successful
I0611 06:52:05.762] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0611 06:52:05.762] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0611 06:52:05.763] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:05.763] has:Object 'Kind' is missing
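The decode failure here is intentional: the busybox-broken.yaml fixture misspells kind as ind (visible in the echoed JSON, "ind":"Pod"), so the universal decoder cannot determine the object's Kind, and the recursive commands must surface the broken third file while still handling busybox0 and busybox1. Corrected, the same document decodes cleanly; a minimal sketch of the fixed manifest:

    cat <<'EOF' | kubectl apply --dry-run -f -
    apiVersion: v1
    kind: Pod            # the fixture says "ind:" here, which is the whole bug
    metadata:
      name: busybox2
      labels:
        app: busybox2
    spec:
      restartPolicy: Always
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sleep", "3600"]
    EOF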
I0611 06:52:05.865] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:05.967] Successful
I0611 06:52:05.967] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:05.967] has:busybox0:busybox1:
I0611 06:52:05.969] Successful
I0611 06:52:05.969] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:05.969] has:Object 'Kind' is missing
I0611 06:52:06.063] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:06.165] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0611 06:52:06.266] E0611 06:52:05.680991   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:06.266] E0611 06:52:05.807467   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:06.267] E0611 06:52:05.916599   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:06.267] E0611 06:52:06.038641   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:06.367] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0611 06:52:06.368] Successful
I0611 06:52:06.368] message:pod/busybox0 labeled
I0611 06:52:06.368] pod/busybox1 labeled
I0611 06:52:06.369] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:06.369] has:Object 'Kind' is missing
I0611 06:52:06.376] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:06.475] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:06.567] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0611 06:52:06.569] Successful
I0611 06:52:06.570] message:pod/busybox0 patched
I0611 06:52:06.570] pod/busybox1 patched
I0611 06:52:06.570] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:06.571] has:Object 'Kind' is missing
I0611 06:52:06.667] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:06.863] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:06.865] Successful
I0611 06:52:06.865] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0611 06:52:06.865] pod "busybox0" force deleted
I0611 06:52:06.866] pod "busybox1" force deleted
I0611 06:52:06.866] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0611 06:52:06.866] has:Object 'Kind' is missing
I0611 06:52:06.971] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:07.145] replicationcontroller/busybox0 created
I0611 06:52:07.151] replicationcontroller/busybox1 created
W0611 06:52:07.251] E0611 06:52:06.682242   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:07.252] E0611 06:52:06.808766   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:07.252] E0611 06:52:06.917975   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:07.252] E0611 06:52:07.039604   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:07.252] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0611 06:52:07.253] I0611 06:52:07.151601   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235922-7347", Name:"busybox0", UID:"f25649ac-b400-44d6-93c3-69076b665719", APIVersion:"v1", ResourceVersion:"1050", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-99q48
W0611 06:52:07.253] I0611 06:52:07.155390   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235922-7347", Name:"busybox1", UID:"cbd2a1e9-aa83-4c1a-80ff-d4588d2e136e", APIVersion:"v1", ResourceVersion:"1052", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-tnw2w
I0611 06:52:07.353] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:07.364] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:07.458] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0611 06:52:07.558] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0611 06:52:07.751] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0611 06:52:07.843] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0611 06:52:07.845] Successful
I0611 06:52:07.845] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0611 06:52:07.845] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0611 06:52:07.846] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:07.846] has:Object 'Kind' is missing
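The hpa assertions at generic-resources.sh:343-344 (minReplicas 1, maxReplicas 2, target CPU 80%) match what a recursive kubectl autoscale over the rc directory would produce; the equivalent single-resource call would be something like:

    kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
    # verified, as the test does, with a go-template:
    kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'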
I0611 06:52:07.923] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0611 06:52:08.022] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0611 06:52:08.119] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:08.218] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0611 06:52:08.316] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0611 06:52:08.511] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0611 06:52:08.605] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0611 06:52:08.608] Successful
I0611 06:52:08.608] message:service/busybox0 exposed
I0611 06:52:08.608] service/busybox1 exposed
I0611 06:52:08.609] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:08.609] has:Object 'Kind' is missing
I0611 06:52:08.710] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:08.802] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0611 06:52:08.891] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0611 06:52:09.113] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0611 06:52:09.199] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0611 06:52:09.202] Successful
I0611 06:52:09.202] message:replicationcontroller/busybox0 scaled
I0611 06:52:09.203] replicationcontroller/busybox1 scaled
I0611 06:52:09.203] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:09.203] has:Object 'Kind' is missing
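Between generic-resources.sh:366-367 and 371-372 the replica counts move from 1 to 2, i.e. a recursive kubectl scale succeeded on the two decodable controllers; per resource the operation is:

    kubectl scale rc busybox0 busybox1 --replicas=2   # prints "scaled" per controller
    kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'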
I0611 06:52:09.297] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:09.494] generic-resources.sh:381: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:09.496] Successful
I0611 06:52:09.496] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0611 06:52:09.497] replicationcontroller "busybox0" force deleted
I0611 06:52:09.497] replicationcontroller "busybox1" force deleted
I0611 06:52:09.497] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:09.497] has:Object 'Kind' is missing
I0611 06:52:09.593] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:09.763] (Bdeployment.apps/nginx1-deployment created
I0611 06:52:09.771] deployment.apps/nginx0-deployment created
W0611 06:52:09.872] E0611 06:52:07.683636   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.872] E0611 06:52:07.810674   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.873] E0611 06:52:07.919342   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.873] E0611 06:52:08.041063   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.874] E0611 06:52:08.685092   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.874] E0611 06:52:08.812287   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.875] E0611 06:52:08.920888   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.875] I0611 06:52:08.993853   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235922-7347", Name:"busybox0", UID:"f25649ac-b400-44d6-93c3-69076b665719", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7hh7p
W0611 06:52:09.876] I0611 06:52:09.004903   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235922-7347", Name:"busybox1", UID:"cbd2a1e9-aa83-4c1a-80ff-d4588d2e136e", APIVersion:"v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-mgq5p
W0611 06:52:09.876] E0611 06:52:09.042139   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.876] E0611 06:52:09.686922   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.877] I0611 06:52:09.770147   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235922-7347", Name:"nginx1-deployment", UID:"1d474b9f-7179-480c-900f-3bbc0632275e", APIVersion:"apps/v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-746b5db49 to 2
W0611 06:52:09.877] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0611 06:52:09.877] I0611 06:52:09.775495   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx1-deployment-746b5db49", UID:"a490e57c-0e2d-4d5c-8cb9-a99d7f9ab33d", APIVersion:"apps/v1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-746b5db49-t72kc
W0611 06:52:09.878] I0611 06:52:09.777530   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235922-7347", Name:"nginx0-deployment", UID:"5b472901-257b-4601-b41e-40281a1a6e75", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-5679bdc9fb to 2
W0611 06:52:09.878] I0611 06:52:09.789894   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx0-deployment-5679bdc9fb", UID:"1b518a46-53b9-45e6-aa14-f38fbd02f6e2", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5679bdc9fb-m4gjg
W0611 06:52:09.878] I0611 06:52:09.794875   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx1-deployment-746b5db49", UID:"a490e57c-0e2d-4d5c-8cb9-a99d7f9ab33d", APIVersion:"apps/v1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-746b5db49-vmbrs
W0611 06:52:09.879] I0611 06:52:09.795864   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235922-7347", Name:"nginx0-deployment-5679bdc9fb", UID:"1b518a46-53b9-45e6-aa14-f38fbd02f6e2", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5679bdc9fb-47cpn
W0611 06:52:09.879] E0611 06:52:09.813906   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:09.923] E0611 06:52:09.922625   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:10.023] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0611 06:52:10.024] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0611 06:52:10.200] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0611 06:52:10.203] Successful
I0611 06:52:10.204] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0611 06:52:10.204] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0611 06:52:10.204] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0611 06:52:10.204] has:Object 'Kind' is missing
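"skipped rollback" is the expected no-op path of kubectl rollout undo: when the requested revision's pod template already matches the live template, nothing is rolled back. The invocation presumably looks like:

    kubectl rollout undo deployment/nginx1-deployment --to-revision=1
    # -> deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)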
I0611 06:52:10.300] deployment.apps/nginx1-deployment paused
I0611 06:52:10.311] deployment.apps/nginx0-deployment paused
W0611 06:52:10.412] E0611 06:52:10.043912   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:10.513] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0611 06:52:10.513] Successful
I0611 06:52:10.513] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0611 06:52:10.514] has:Object 'Kind' is missing
I0611 06:52:10.557] deployment.apps/nginx1-deployment resumed
I0611 06:52:10.567] deployment.apps/nginx0-deployment resumed
I0611 06:52:10.673] generic-resources.sh:408: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0611 06:52:10.676] Successful
I0611 06:52:10.677] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0611 06:52:10.677] has:Object 'Kind' is missing
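Note the asymmetry these two checks pin down: pausing sets .spec.paused to true (generic-resources.sh:402), but resuming removes the field instead of setting it to false, so the go-template prints <no value> (generic-resources.sh:408):

    kubectl rollout pause deployment/nginx1-deployment
    kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # true
    kubectl rollout resume deployment/nginx1-deployment
    kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # <no value>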
W0611 06:52:10.777] E0611 06:52:10.688317   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:10.816] E0611 06:52:10.815912   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:10.863] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0611 06:52:10.884] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0611 06:52:10.925] E0611 06:52:10.924352   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:11.026] Successful
I0611 06:52:11.026] message:deployment.apps/nginx1-deployment 
I0611 06:52:11.026] REVISION  CHANGE-CAUSE
I0611 06:52:11.026] 1         <none>
I0611 06:52:11.026] 
I0611 06:52:11.026] deployment.apps/nginx0-deployment 
I0611 06:52:11.026] REVISION  CHANGE-CAUSE
I0611 06:52:11.026] 1         <none>
I0611 06:52:11.026] 
I0611 06:52:11.027] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0611 06:52:11.027] has:nginx0-deployment
I0611 06:52:11.027] Successful
I0611 06:52:11.027] message:deployment.apps/nginx1-deployment 
I0611 06:52:11.027] REVISION  CHANGE-CAUSE
I0611 06:52:11.027] 1         <none>
I0611 06:52:11.027] 
I0611 06:52:11.027] deployment.apps/nginx0-deployment 
I0611 06:52:11.027] REVISION  CHANGE-CAUSE
I0611 06:52:11.027] 1         <none>
I0611 06:52:11.028] 
I0611 06:52:11.028] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0611 06:52:11.028] has:nginx1-deployment
I0611 06:52:11.028] Successful
I0611 06:52:11.028] message:deployment.apps/nginx1-deployment 
I0611 06:52:11.028] REVISION  CHANGE-CAUSE
I0611 06:52:11.028] 1         <none>
I0611 06:52:11.028] 
I0611 06:52:11.028] deployment.apps/nginx0-deployment 
I0611 06:52:11.028] REVISION  CHANGE-CAUSE
I0611 06:52:11.029] 1         <none>
I0611 06:52:11.029] 
I0611 06:52:11.029] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0611 06:52:11.029] has:Object 'Kind' is missing
I0611 06:52:11.029] deployment.apps "nginx1-deployment" force deleted
I0611 06:52:11.029] deployment.apps "nginx0-deployment" force deleted
W0611 06:52:11.130] E0611 06:52:11.045682   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:11.690] E0611 06:52:11.689877   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:11.818] E0611 06:52:11.817621   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:11.926] E0611 06:52:11.926097   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:12.027] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:12.141] replicationcontroller/busybox0 created
I0611 06:52:12.147] replicationcontroller/busybox1 created
I0611 06:52:12.253] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0611 06:52:12.346] Successful
I0611 06:52:12.346] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0611 06:52:12.349] message:no rollbacker has been implemented for "ReplicationController"
I0611 06:52:12.349] no rollbacker has been implemented for "ReplicationController"
I0611 06:52:12.350] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.350] has:Object 'Kind' is missing
I0611 06:52:12.442] Successful
I0611 06:52:12.442] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.443] error: replicationcontrollers "busybox0" pausing is not supported
I0611 06:52:12.443] error: replicationcontrollers "busybox1" pausing is not supported
I0611 06:52:12.443] has:Object 'Kind' is missing
I0611 06:52:12.443] Successful
I0611 06:52:12.444] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.444] error: replicationcontrollers "busybox0" pausing is not supported
I0611 06:52:12.444] error: replicationcontrollers "busybox1" pausing is not supported
I0611 06:52:12.444] has:replicationcontrollers "busybox0" pausing is not supported
I0611 06:52:12.446] Successful
I0611 06:52:12.446] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.447] error: replicationcontrollers "busybox0" pausing is not supported
I0611 06:52:12.447] error: replicationcontrollers "busybox1" pausing is not supported
I0611 06:52:12.447] has:replicationcontrollers "busybox1" pausing is not supported
I0611 06:52:12.537] Successful
I0611 06:52:12.538] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.538] error: replicationcontrollers "busybox0" resuming is not supported
I0611 06:52:12.538] error: replicationcontrollers "busybox1" resuming is not supported
I0611 06:52:12.538] has:Object 'Kind' is missing
I0611 06:52:12.539] Successful
I0611 06:52:12.539] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.539] error: replicationcontrollers "busybox0" resuming is not supported
I0611 06:52:12.540] error: replicationcontrollers "busybox1" resuming is not supported
I0611 06:52:12.540] has:replicationcontrollers "busybox0" resuming is not supported
I0611 06:52:12.541] Successful
I0611 06:52:12.542] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0611 06:52:12.542] error: replicationcontrollers "busybox0" resuming is not supported
I0611 06:52:12.542] error: replicationcontrollers "busybox1" resuming is not supported
I0611 06:52:12.542] has:replicationcontrollers "busybox0" resuming is not supported
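The rollout subcommands are only wired up for kinds that implement the corresponding rollback/pause interfaces; ReplicationControllers implement none of them, which is exactly what the assertions above verify. For example:

    kubectl rollout pause rc/busybox0
    # -> error: replicationcontrollers "busybox0" pausing is not supported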
I0611 06:52:12.620] replicationcontroller "busybox0" force deleted
I0611 06:52:12.626] replicationcontroller "busybox1" force deleted
W0611 06:52:12.726] E0611 06:52:12.047793   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:12.727] I0611 06:52:12.147198   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235922-7347", Name:"busybox0", UID:"a734a891-4854-4888-adb8-a62c8b137767", APIVersion:"v1", ResourceVersion:"1140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-t2mxz
W0611 06:52:12.727] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0611 06:52:12.727] I0611 06:52:12.152645   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235922-7347", Name:"busybox1", UID:"255dd8fc-b70b-4acf-88bf-cf923eee4e73", APIVersion:"v1", ResourceVersion:"1142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-75fp6
W0611 06:52:12.728] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0611 06:52:12.728] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0611 06:52:12.728] E0611 06:52:12.691591   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:12.823] E0611 06:52:12.823174   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:12.928] E0611 06:52:12.927647   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:13.050] E0611 06:52:13.049165   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:13.632] Recording: run_namespace_tests
I0611 06:52:13.633] Running command: run_namespace_tests
I0611 06:52:13.655] 
I0611 06:52:13.657] +++ Running case: test-cmd.run_namespace_tests 
I0611 06:52:13.659] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:52:13.661] +++ command: run_namespace_tests
I0611 06:52:13.670] +++ [0611 06:52:13] Testing kubectl(v1:namespaces)
I0611 06:52:13.743] namespace/my-namespace created
I0611 06:52:13.842] core.sh:1312: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0611 06:52:13.921] namespace "my-namespace" deleted
W0611 06:52:14.022] E0611 06:52:13.693078   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:14.023] E0611 06:52:13.826435   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:14.023] E0611 06:52:13.928794   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:14.051] E0611 06:52:14.050891   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:14.695] E0611 06:52:14.694546   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:14.829] E0611 06:52:14.828792   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:14.931] E0611 06:52:14.930483   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:15.053] E0611 06:52:15.052259   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:15.697] E0611 06:52:15.696298   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:15.831] E0611 06:52:15.830527   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:15.932] E0611 06:52:15.932259   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:16.054] E0611 06:52:16.053639   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:16.699] E0611 06:52:16.698548   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:16.833] E0611 06:52:16.832322   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:16.935] E0611 06:52:16.934805   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:17.056] E0611 06:52:17.055433   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:17.701] E0611 06:52:17.700222   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:17.834] E0611 06:52:17.833917   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:17.937] E0611 06:52:17.937020   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:18.057] E0611 06:52:18.057200   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:18.702] E0611 06:52:18.701990   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:18.836] E0611 06:52:18.835531   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:18.938] E0611 06:52:18.938242   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:19.059] E0611 06:52:19.058343   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:19.159] namespace/my-namespace condition met
I0611 06:52:19.171] Successful
I0611 06:52:19.171] message:Error from server (NotFound): namespaces "my-namespace" not found
I0611 06:52:19.171] has: not found
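"condition met" followed by a NotFound assertion is the pattern of waiting for a deletion to complete; the script presumably does this with kubectl wait (the exact invocation lives in the test script, not in this log):

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s   # "condition met"
    kubectl get namespace my-namespace   # -> Error from server (NotFound)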
I0611 06:52:19.254] namespace/my-namespace created
I0611 06:52:19.359] core.sh:1321: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0611 06:52:19.637] Successful
I0611 06:52:19.637] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0611 06:52:19.638] namespace "kube-node-lease" deleted
... skipping 30 lines ...
I0611 06:52:19.642] namespace "namespace-1560235883-12761" deleted
I0611 06:52:19.642] namespace "namespace-1560235883-12856" deleted
I0611 06:52:19.642] namespace "namespace-1560235886-29339" deleted
I0611 06:52:19.642] namespace "namespace-1560235887-12912" deleted
I0611 06:52:19.642] namespace "namespace-1560235922-3118" deleted
I0611 06:52:19.642] namespace "namespace-1560235922-7347" deleted
I0611 06:52:19.642] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0611 06:52:19.643] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0611 06:52:19.643] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0611 06:52:19.643] has:warning: deleting cluster-scoped resources
I0611 06:52:19.643] Successful
I0611 06:52:19.643] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0611 06:52:19.643] namespace "kube-node-lease" deleted
I0611 06:52:19.643] namespace "my-namespace" deleted
I0611 06:52:19.643] namespace "namespace-1560235788-8858" deleted
... skipping 28 lines ...
I0611 06:52:19.647] namespace "namespace-1560235883-12761" deleted
I0611 06:52:19.647] namespace "namespace-1560235883-12856" deleted
I0611 06:52:19.647] namespace "namespace-1560235886-29339" deleted
I0611 06:52:19.648] namespace "namespace-1560235887-12912" deleted
I0611 06:52:19.648] namespace "namespace-1560235922-3118" deleted
I0611 06:52:19.648] namespace "namespace-1560235922-7347" deleted
I0611 06:52:19.648] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0611 06:52:19.648] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0611 06:52:19.648] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0611 06:52:19.649] has:namespace "my-namespace" deleted
I0611 06:52:19.749] core.sh:1333: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0611 06:52:19.828] namespace/other created
I0611 06:52:19.933] core.sh:1337: Successful get namespaces/other {{.metadata.name}}: other
I0611 06:52:20.033] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:20.208] pod/valid-pod created
I0611 06:52:20.313] core.sh:1345: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0611 06:52:20.413] core.sh:1347: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0611 06:52:20.502] Successful
I0611 06:52:20.502] message:error: a resource cannot be retrieved by name across all namespaces
I0611 06:52:20.502] has:a resource cannot be retrieved by name across all namespaces
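The rejected command combines a single resource name with --all-namespaces, which kubectl forbids since names are only unique within a namespace; the list form, or an explicit -n, works:

    kubectl get pod valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pods --all-namespaces            # listing across namespaces is fine
    kubectl get pod valid-pod -n other           # by name, explicitly scoped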
W0611 06:52:20.603] E0611 06:52:19.703914   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:20.603] E0611 06:52:19.836891   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:20.604] E0611 06:52:19.940201   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:20.604] I0611 06:52:19.985688   51471 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0611 06:52:20.604] E0611 06:52:20.060910   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:20.604] I0611 06:52:20.086134   51471 controller_utils.go:1036] Caches are synced for garbage collector controller
W0611 06:52:20.699] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0611 06:52:20.705] E0611 06:52:20.705272   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:20.807] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0611 06:52:20.807] pod "valid-pod" force deleted
I0611 06:52:20.819] core.sh:1358: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:20.902] namespace "other" deleted
W0611 06:52:21.003] E0611 06:52:20.838547   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:21.003] E0611 06:52:20.942236   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:21.063] E0611 06:52:21.062525   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:21.709] E0611 06:52:21.708484   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:21.841] E0611 06:52:21.840370   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:21.944] E0611 06:52:21.943987   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:22.064] E0611 06:52:22.064169   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:22.646] I0611 06:52:22.645329   51471 horizontal.go:340] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1560235922-7347
W0611 06:52:22.651] I0611 06:52:22.651103   51471 horizontal.go:340] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1560235922-7347
W0611 06:52:22.711] E0611 06:52:22.710452   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:22.843] E0611 06:52:22.843197   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:22.946] E0611 06:52:22.945548   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:23.066] E0611 06:52:23.065834   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:23.712] E0611 06:52:23.711800   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:23.845] E0611 06:52:23.844666   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:23.947] E0611 06:52:23.947096   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:24.067] E0611 06:52:24.067343   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:24.716] E0611 06:52:24.715467   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:24.847] E0611 06:52:24.846502   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:24.954] E0611 06:52:24.954138   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:25.070] E0611 06:52:25.069414   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:25.717] E0611 06:52:25.716986   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:25.848] E0611 06:52:25.847917   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:25.957] E0611 06:52:25.955311   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:26.067] +++ exit code: 0
I0611 06:52:26.109] Recording: run_secrets_test
I0611 06:52:26.109] Running command: run_secrets_test
I0611 06:52:26.133] 
I0611 06:52:26.137] +++ Running case: test-cmd.run_secrets_test 
I0611 06:52:26.139] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0611 06:52:28.093] core.sh:755: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0611 06:52:28.169] secret "test-secret" deleted
I0611 06:52:28.251] secret/test-secret created
I0611 06:52:28.345] core.sh:761: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0611 06:52:28.431] core.sh:762: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0611 06:52:28.509] secret "test-secret" deleted
W0611 06:52:28.610] E0611 06:52:26.072450   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.610] I0611 06:52:26.386501   69192 loader.go:359] Config loaded from file:  /tmp/tmp.32XK0AHAAO/.kube/config
W0611 06:52:28.610] E0611 06:52:26.719384   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.611] E0611 06:52:26.849328   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.611] E0611 06:52:26.956730   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.611] E0611 06:52:27.074015   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.611] I0611 06:52:27.624704   51471 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0611 06:52:28.611] E0611 06:52:27.720966   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.611] I0611 06:52:27.725415   51471 controller_utils.go:1036] Caches are synced for resource quota controller
W0611 06:52:28.612] E0611 06:52:27.850875   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.612] E0611 06:52:27.957979   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:28.612] E0611 06:52:28.075311   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:28.712] secret/secret-string-data created
I0611 06:52:28.795] core.sh:784: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0611 06:52:28.876] core.sh:785: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0611 06:52:28.959] core.sh:786: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0611 06:52:29.039] secret "secret-string-data" deleted
I0611 06:52:29.130] core.sh:795: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:29.283] (Bsecret "test-secret" deleted
I0611 06:52:29.360] namespace "test-secrets" deleted
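Note: the djE= and djI= values in the core.sh:784-785 checks are simply base64("v1") and base64("v2"): entries supplied via stringData are stored base64-encoded under .data, and stringData itself is write-only, hence the <no value> at core.sh:786. A minimal reproduction sketch (manifest reconstructed, not copied from the suite):

    # secret-string-data.yaml -- stringData values land in .data base64-encoded
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-string-data
    stringData:
      k1: v1
      k2: v2

    kubectl create -f secret-string-data.yaml --namespace=test-secrets
    kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'  # map[k1:djE= k2:djI=]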
... skipping 24 lines (the same reflector.go:125 list error, repeated from E0611 06:52:28.722417 to E0611 06:52:34.084425) ...
I0611 06:52:34.474] +++ exit code: 0
I0611 06:52:34.508] Recording: run_configmap_tests
I0611 06:52:34.508] Running command: run_configmap_tests
I0611 06:52:34.528] 
I0611 06:52:34.530] +++ Running case: test-cmd.run_configmap_tests 
I0611 06:52:34.532] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:52:34.534] +++ command: run_configmap_tests
I0611 06:52:34.545] +++ [0611 06:52:34] Creating namespace namespace-1560235954-11564
I0611 06:52:34.617] namespace/namespace-1560235954-11564 created
I0611 06:52:34.683] Context "test" modified.
I0611 06:52:34.690] +++ [0611 06:52:34] Testing configmaps
W0611 06:52:34.791] E0611 06:52:34.731313   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:34.862] E0611 06:52:34.861175   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:34.962] configmap/test-configmap created
I0611 06:52:35.001] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0611 06:52:35.074] configmap "test-configmap" deleted
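Note: the configmap case follows the same create / read-back / delete pattern as the secrets above. Sketch (the --from-literal payload is a placeholder; this log does not show the data used):

    kubectl create configmap test-configmap --from-literal=key1=value1
    kubectl get configmap/test-configmap -o go-template='{{.metadata.name}}'  # test-configmap
    kubectl delete configmap test-configmap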
I0611 06:52:35.171] core.sh:33: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-configmaps\" }}found{{end}}{{end}}:: :
I0611 06:52:35.249] namespace/test-configmaps created
I0611 06:52:35.340] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0611 06:52:35.693] configmap/test-binary-configmap created
I0611 06:52:35.790] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0611 06:52:35.887] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0611 06:52:36.147] configmap "test-configmap" deleted
I0611 06:52:36.235] configmap "test-binary-configmap" deleted
I0611 06:52:36.318] namespace "test-configmaps" deleted
... skipping 26 lines (the same reflector.go:125 list error, repeated from E0611 06:52:34.968391 to E0611 06:52:41.096770) ...
I0611 06:52:41.500] +++ exit code: 0
I0611 06:52:41.533] Recording: run_client_config_tests
I0611 06:52:41.534] Running command: run_client_config_tests
I0611 06:52:41.552] 
I0611 06:52:41.554] +++ Running case: test-cmd.run_client_config_tests 
I0611 06:52:41.556] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:52:41.558] +++ command: run_client_config_tests
I0611 06:52:41.571] +++ [0611 06:52:41] Creating namespace namespace-1560235961-7231
I0611 06:52:41.649] namespace/namespace-1560235961-7231 created
I0611 06:52:41.722] Context "test" modified.
I0611 06:52:41.730] +++ [0611 06:52:41] Testing client config
I0611 06:52:41.807] Successful
I0611 06:52:41.808] message:error: stat missing: no such file or directory
I0611 06:52:41.808] has:missing: no such file or directory
I0611 06:52:41.885] Successful
I0611 06:52:41.885] message:error: stat missing: no such file or directory
I0611 06:52:41.885] has:missing: no such file or directory
I0611 06:52:41.962] Successful
I0611 06:52:41.962] message:error: stat missing: no such file or directory
I0611 06:52:41.962] has:missing: no such file or directory
I0611 06:52:42.040] Successful
I0611 06:52:42.040] message:Error in configuration: context was not found for specified context: missing-context
I0611 06:52:42.040] has:context was not found for specified context: missing-context
I0611 06:52:42.119] Successful
I0611 06:52:42.120] message:error: no server found for cluster "missing-cluster"
I0611 06:52:42.120] has:no server found for cluster "missing-cluster"
I0611 06:52:42.191] Successful
I0611 06:52:42.192] message:error: auth info "missing-user" does not exist
I0611 06:52:42.192] has:auth info "missing-user" does not exist
W0611 06:52:42.292] E0611 06:52:41.742753   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:42.293] E0611 06:52:41.874047   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:42.293] E0611 06:52:41.980136   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:42.294] E0611 06:52:42.098832   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:42.394] Successful
I0611 06:52:42.394] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0611 06:52:42.395] has:Error loading config file
I0611 06:52:42.414] Successful
I0611 06:52:42.414] message:error: stat missing-config: no such file or directory
I0611 06:52:42.414] has:no such file or directory
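Note: run_client_config_tests drives kubectl with deliberately broken client configuration and greps each error with has:. The invocations behind the messages above look roughly like this (names exactly as in the output):

    kubectl get pods --kubeconfig=missing          # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context     # context was not found for specified context
    kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user           # auth info "missing-user" does not exist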
I0611 06:52:42.428] +++ exit code: 0
I0611 06:52:42.468] Recording: run_service_accounts_tests
I0611 06:52:42.468] Running command: run_service_accounts_tests
I0611 06:52:42.491] 
I0611 06:52:42.494] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0611 06:52:42.868] namespace/test-service-accounts created
I0611 06:52:42.961] core.sh:820: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0611 06:52:43.038] serviceaccount/test-service-account created
I0611 06:52:43.139] core.sh:826: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0611 06:52:43.223] serviceaccount "test-service-account" deleted
I0611 06:52:43.313] namespace "test-service-accounts" deleted
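Note: the service-account case is the minimal lifecycle check. Sketch:

    kubectl create serviceaccount test-service-account --namespace=test-service-accounts
    kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
    kubectl delete serviceaccount test-service-account --namespace=test-service-accounts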
... skipping 24 lines (the same reflector.go:125 list error, repeated from E0611 06:52:42.744391 to E0611 06:52:48.113528) ...
I0611 06:52:48.457] +++ exit code: 0
I0611 06:52:48.494] Recording: run_job_tests
I0611 06:52:48.494] Running command: run_job_tests
I0611 06:52:48.516] 
I0611 06:52:48.518] +++ Running case: test-cmd.run_job_tests 
I0611 06:52:48.521] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 3 lines ...
I0611 06:52:48.698] Context "test" modified.
I0611 06:52:48.706] +++ [0611 06:52:48] Testing job
I0611 06:52:48.803] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-jobs\" }}found{{end}}{{end}}:: :
I0611 06:52:48.880] namespace/test-jobs created
I0611 06:52:48.971] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I0611 06:52:49.053] cronjob.batch/pi created
W0611 06:52:49.154] E0611 06:52:48.755102   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:49.155] E0611 06:52:48.884715   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:49.155] E0611 06:52:48.993370   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:49.155] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0611 06:52:49.156] E0611 06:52:49.114890   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:49.256] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I0611 06:52:49.256] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I0611 06:52:49.257] pi     59 23 31 2 *   False     0        <none>          0s
I0611 06:52:49.339] Name:                          pi
I0611 06:52:49.339] Namespace:                     test-jobs
I0611 06:52:49.340] Labels:                        run=pi
I0611 06:52:49.340] Annotations:                   <none>
I0611 06:52:49.340] Schedule:                      59 23 31 2 *
I0611 06:52:49.340] Concurrency Policy:            Allow
I0611 06:52:49.340] Suspend:                       False
I0611 06:52:49.341] Successful Job History Limit:  3
I0611 06:52:49.341] Failed Job History Limit:      1
I0611 06:52:49.341] Starting Deadline Seconds:     <unset>
I0611 06:52:49.341] Selector:                      <unset>
I0611 06:52:49.341] Parallelism:                   <unset>
I0611 06:52:49.342] Completions:                   <unset>
I0611 06:52:49.342] Pod Template:
I0611 06:52:49.342]   Labels:  run=pi
... skipping 32 lines ...
I0611 06:52:49.881]                 run=pi
I0611 06:52:49.881] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0611 06:52:49.881] Controlled By:  CronJob/pi
I0611 06:52:49.881] Parallelism:    1
I0611 06:52:49.881] Completions:    1
I0611 06:52:49.881] Start Time:     Tue, 11 Jun 2019 06:52:49 +0000
I0611 06:52:49.881] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0611 06:52:49.881] Pod Template:
I0611 06:52:49.881]   Labels:  controller-uid=7e967030-476d-4eac-ae5e-09bffc1ae582
I0611 06:52:49.882]            job-name=test-job
I0611 06:52:49.882]            run=pi
I0611 06:52:49.882]   Containers:
I0611 06:52:49.882]    pi:
... skipping 16 lines ...
I0611 06:52:49.883]   ----    ------            ----  ----            -------
I0611 06:52:49.883]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-8dnng
I0611 06:52:49.960] job.batch "test-job" deleted
I0611 06:52:50.042] cronjob.batch "pi" deleted
I0611 06:52:50.131] namespace "test-jobs" deleted
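Note: per the deprecation warning above, the cronjob was created via kubectl run with the cronjob generator, and the Job described above (Controlled By: CronJob/pi, annotation cronjob.kubernetes.io/instantiate: manual) is the shape produced by kubectl create job --from=cronjob/.... A sketch of the two calls (flags reconstructed, not copied from the suite):

    kubectl run pi --generator=cronjob/v1beta1 --schedule='59 23 31 2 *' --restart=OnFailure \
        --image=k8s.gcr.io/perl --namespace=test-jobs -- perl -Mbignum=bpi -wle 'print bpi(20)'
    kubectl create job test-job --from=cronjob/pi --namespace=test-jobs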
W0611 06:52:50.232] I0611 06:52:49.620787   51471 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"7e967030-476d-4eac-ae5e-09bffc1ae582", APIVersion:"batch/v1", ResourceVersion:"1434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-8dnng
... skipping 24 lines (the same reflector.go:125 list error, repeated from E0611 06:52:49.756400 to E0611 06:52:55.125158) ...
I0611 06:52:55.290] +++ exit code: 0
I0611 06:52:55.328] Recording: run_create_job_tests
I0611 06:52:55.328] Running command: run_create_job_tests
I0611 06:52:55.351] 
I0611 06:52:55.353] +++ Running case: test-cmd.run_create_job_tests 
I0611 06:52:55.356] +++ working dir: /go/src/k8s.io/kubernetes
I0611 06:52:55.358] +++ command: run_create_job_tests
I0611 06:52:55.372] +++ [0611 06:52:55] Creating namespace namespace-1560235975-27756
I0611 06:52:55.445] namespace/namespace-1560235975-27756 created
I0611 06:52:55.524] Context "test" modified.
I0611 06:52:55.611] job.batch/test-job created
W0611 06:52:55.712] I0611 06:52:55.610378   51471 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1560235975-27756", Name:"test-job", UID:"058860b0-0a45-47b5-aa14-d12947744f31", APIVersion:"batch/v1", ResourceVersion:"1452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-m45vw
W0611 06:52:55.767] E0611 06:52:55.766261   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:55.868] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0611 06:52:55.868] job.batch "test-job" deleted
I0611 06:52:55.908] job.batch/test-job-pi created
I0611 06:52:56.010] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0611 06:52:56.093] job.batch "test-job-pi" deleted
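Note: create.sh:86 and create.sh:92 assert on the container image that kubectl create job sets. A sketch of the two creations (the command after -- is reconstructed from the message:[perl ...] check below, not copied from the suite):

    kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
    kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'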
W0611 06:52:56.194] E0611 06:52:55.896147   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:56.195] I0611 06:52:55.903576   51471 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1560235975-27756", Name:"test-job-pi", UID:"201e8cbe-b891-4dac-bdf9-b6a955cf3ff4", APIVersion:"batch/v1", ResourceVersion:"1459", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-bn6n2
W0611 06:52:56.195] E0611 06:52:56.004639   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:56.195] E0611 06:52:56.126828   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:56.196] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
I0611 06:52:56.296] cronjob.batch/test-pi created
I0611 06:52:56.303] job.batch/my-pi created
I0611 06:52:56.395] Successful
I0611 06:52:56.395] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0611 06:52:56.396] has:perl -Mbignum=bpi -wle print bpi(10)
... skipping 8 lines ...
I0611 06:52:56.672] +++ command: run_pod_templates_tests
I0611 06:52:56.687] +++ [0611 06:52:56] Creating namespace namespace-1560235976-29753
I0611 06:52:56.781] namespace/namespace-1560235976-29753 created
I0611 06:52:56.861] Context "test" modified.
I0611 06:52:56.869] +++ [0611 06:52:56] Testing pod templates
W0611 06:52:56.970] I0611 06:52:56.300394   51471 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1560235975-27756", Name:"my-pi", UID:"2d3d12e1-581d-4c9b-a2af-f6597011575d", APIVersion:"batch/v1", ResourceVersion:"1467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-7gsdb
W0611 06:52:56.970] E0611 06:52:56.767729   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:56.970] E0611 06:52:56.897595   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:57.006] E0611 06:52:57.006144   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:57.107] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:52:57.169] podtemplate/nginx created
W0611 06:52:57.270] E0611 06:52:57.128716   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:57.271] I0611 06:52:57.166163   48115 controller.go:606] quota admission added evaluator for: podtemplates
I0611 06:52:57.371] core.sh:1423: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0611 06:52:57.371] NAME    CONTAINERS   IMAGES   POD LABELS
I0611 06:52:57.371] nginx   nginx        nginx    name=nginx
I0611 06:52:57.567] core.sh:1431: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0611 06:52:57.652] podtemplate "nginx" deleted
... skipping 69 lines ...
I0611 06:52:58.707] Port:              <unset>  6379/TCP
I0611 06:52:58.707] TargetPort:        6379/TCP
I0611 06:52:58.707] Endpoints:         <none>
I0611 06:52:58.707] Session Affinity:  None
I0611 06:52:58.707] Events:            <none>
W0611 06:52:58.807] E0611 06:52:57.769321   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:58.808] E0611 06:52:57.899257   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:58.808] E0611 06:52:58.007530   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:58.809] E0611 06:52:58.130269   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:58.809] E0611 06:52:58.770235   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:58.901] E0611 06:52:58.900429   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:52:59.001] Successful describe services:
I0611 06:52:59.002] Name:              kubernetes
I0611 06:52:59.002] Namespace:         default
I0611 06:52:59.002] Labels:            component=apiserver
I0611 06:52:59.002]                    provider=kubernetes
I0611 06:52:59.002] Annotations:       <none>
... skipping 238 lines ...
I0611 06:52:59.778]   selector:
I0611 06:52:59.778]     role: padawan
I0611 06:52:59.778]   sessionAffinity: None
I0611 06:52:59.778]   type: ClusterIP
I0611 06:52:59.778] status:
I0611 06:52:59.778]   loadBalancer: {}
W0611 06:52:59.879] E0611 06:52:59.008599   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:59.879] E0611 06:52:59.131447   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:59.880] E0611 06:52:59.771978   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:52:59.880] error: you must specify resources by --filename when --local is set.
W0611 06:52:59.880] Example resource specifications include:
W0611 06:52:59.880]    '-f rsrc.yaml'
W0611 06:52:59.880]    '--filename=rsrc.json'
W0611 06:52:59.903] E0611 06:52:59.902925   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:00.004] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0611 06:53:00.106] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0611 06:53:00.204] service "redis-master" deleted
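Note: core.sh:886 walks the selector map with a range template, and the --local error above is kubectl refusing a client-side edit with no --filename input. The read-back looks like:

    kubectl get services redis-master -o go-template='{{range .spec.selector}}{{.}}:{{end}}'  # redis:master:backend: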
I0611 06:53:00.295] core.sh:900: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0611 06:53:00.383] core.sh:904: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0611 06:53:00.551] service/redis-master created
... skipping 5 lines ...
I0611 06:53:01.228] core.sh:940: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0611 06:53:01.314] service "redis-master" deleted
I0611 06:53:01.405] service "service-v1-test" deleted
I0611 06:53:01.505] core.sh:948: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0611 06:53:01.587] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0611 06:53:01.738] service/redis-master created
... skipping 8 lines (the same reflector.go:125 list error, repeated from E0611 06:53:00.010396 to E0611 06:53:01.905096) ...
I0611 06:53:02.006] service/redis-slave created
I0611 06:53:02.006] core.sh:957: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0611 06:53:02.072] Successful
I0611 06:53:02.073] message:NAME           RSRC
I0611 06:53:02.073] kubernetes     145
I0611 06:53:02.073] redis-master   1501
... skipping 81 lines ...
I0611 06:53:06.564]   Volumes:	<none>
I0611 06:53:06.564]  (dry run)
I0611 06:53:06.666] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0611 06:53:06.760] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0611 06:53:06.849] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0611 06:53:06.965] daemonset.extensions/bind rolled back
W0611 06:53:07.065] E0611 06:53:02.013832   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.066] E0611 06:53:02.136766   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.066] E0611 06:53:02.776047   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.066] E0611 06:53:02.906456   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.067] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0611 06:53:07.067] I0611 06:53:02.955836   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"5166258f-0038-4256-aa01-24e7bccb5987", APIVersion:"apps/v1", ResourceVersion:"1518", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6b5b57544d to 2
W0611 06:53:07.067] I0611 06:53:02.966800   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6b5b57544d", UID:"a6dbf493-b4d6-45aa-ae0c-0d105f4f2403", APIVersion:"apps/v1", ResourceVersion:"1519", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6b5b57544d-l5s9s
W0611 06:53:07.068] I0611 06:53:02.970281   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6b5b57544d", UID:"a6dbf493-b4d6-45aa-ae0c-0d105f4f2403", APIVersion:"apps/v1", ResourceVersion:"1519", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6b5b57544d-5ctds
W0611 06:53:07.068] E0611 06:53:03.015164   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.068] E0611 06:53:03.137900   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.068] E0611 06:53:03.777437   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.069] E0611 06:53:03.907707   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.069] E0611 06:53:04.016438   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.069] E0611 06:53:04.139244   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.069] I0611 06:53:04.346747   48115 controller.go:606] quota admission added evaluator for: daemonsets.extensions
... skipping 10 lines (the same reflector.go:125 list error, repeated from E0611 06:53:04.778548 to E0611 06:53:06.911936) ...
W0611 06:53:07.076] E0611 06:53:06.991079   51471 daemon_controller.go:302] namespace-1560235985-19016/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1560235985-19016", SelfLink:"/apis/apps/v1/namespaces/namespace-1560235985-19016/daemonsets/bind", UID:"2025ea73-0e3b-4317-8b56-1a89c96d2699", ResourceVersion:"1584", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63695832785, loc:(*time.Location)(0x72e8ba0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1560235985-19016\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002524fe0), Fields:(*v1.Fields)(0xc00096ded8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0025255e0), Fields:(*v1.Fields)(0xc00096df90)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002525780), Fields:(*v1.Fields)(0xc002f3e000)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002525980), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0031e1838), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fb83c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0025259a0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002f3e068)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0031e18b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0611 06:53:07.076] E0611 06:53:07.020610   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:07.144] E0611 06:53:07.143783   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:07.246] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0611 06:53:07.246] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0611 06:53:07.257] Successful
I0611 06:53:07.257] message:error: unable to find specified revision 1000000 in history
I0611 06:53:07.257] has:unable to find specified revision
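Note: apps.sh:85-98 exercise kubectl rollout undo against the daemonset, including the failure path for a revision that does not exist. Sketch (namespace as in this run):

    kubectl rollout undo daemonset/bind --namespace=namespace-1560235985-19016
    kubectl rollout undo daemonset/bind --to-revision=1000000 --namespace=namespace-1560235985-19016  # error: unable to find specified revision 1000000 in history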
I0611 06:53:07.349] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0611 06:53:07.439] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0611 06:53:07.548] daemonset.extensions/bind rolled back
I0611 06:53:07.655] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0611 06:53:07.744] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 10 lines ...
I0611 06:53:08.061] namespace/namespace-1560235987-6835 created
I0611 06:53:08.128] Context "test" modified.
I0611 06:53:08.134] +++ [0611 06:53:08] Testing kubectl(v1:replicationcontrollers)
I0611 06:53:08.234] core.sh:1034: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:08.427] replicationcontroller/frontend created
I0611 06:53:08.524] replicationcontroller "frontend" deleted
W0611 06:53:08.625] E0611 06:53:07.782265   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:08.625] E0611 06:53:07.915335   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:08.626] E0611 06:53:08.021927   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:08.626] E0611 06:53:08.145115   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:08.626] I0611 06:53:08.433996   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d0b7acfe-3d1e-44ce-a8b4-77a4955152bc", APIVersion:"v1", ResourceVersion:"1595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8pwq4
W0611 06:53:08.627] I0611 06:53:08.439361   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d0b7acfe-3d1e-44ce-a8b4-77a4955152bc", APIVersion:"v1", ResourceVersion:"1595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-phpj8
W0611 06:53:08.627] I0611 06:53:08.442462   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d0b7acfe-3d1e-44ce-a8b4-77a4955152bc", APIVersion:"v1", ResourceVersion:"1595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c858f
I0611 06:53:08.728] core.sh:1039: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:08.739] core.sh:1043: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:08.939] replicationcontroller/frontend created
W0611 06:53:09.039] E0611 06:53:08.783807   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:09.040] E0611 06:53:08.917170   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:09.040] I0611 06:53:08.947159   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"67dc5ef4-c180-4944-b5dd-b6397e4209ae", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5x45d
W0611 06:53:09.041] I0611 06:53:08.954134   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"67dc5ef4-c180-4944-b5dd-b6397e4209ae", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rzvz4
W0611 06:53:09.041] I0611 06:53:08.955102   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"67dc5ef4-c180-4944-b5dd-b6397e4209ae", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xwsdc
W0611 06:53:09.042] E0611 06:53:09.023967   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:09.142] core.sh:1047: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0611 06:53:09.207] core.sh:1049: Successful describe rc frontend:
I0611 06:53:09.208] Name:         frontend
I0611 06:53:09.208] Namespace:    namespace-1560235987-6835
I0611 06:53:09.208] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.208] Labels:       app=guestbook
I0611 06:53:09.208]               tier=frontend
I0611 06:53:09.208] Annotations:  <none>
I0611 06:53:09.209] Replicas:     3 current / 3 desired
I0611 06:53:09.209] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.209] Pod Template:
I0611 06:53:09.209]   Labels:  app=guestbook
I0611 06:53:09.209]            tier=frontend
I0611 06:53:09.209]   Containers:
I0611 06:53:09.209]    php-redis:
I0611 06:53:09.209]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0611 06:53:09.333] Namespace:    namespace-1560235987-6835
I0611 06:53:09.333] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.334] Labels:       app=guestbook
I0611 06:53:09.334]               tier=frontend
I0611 06:53:09.334] Annotations:  <none>
I0611 06:53:09.334] Replicas:     3 current / 3 desired
I0611 06:53:09.335] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.335] Pod Template:
I0611 06:53:09.335]   Labels:  app=guestbook
I0611 06:53:09.335]            tier=frontend
I0611 06:53:09.335]   Containers:
I0611 06:53:09.336]    php-redis:
I0611 06:53:09.336]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0611 06:53:09.338]   Type    Reason            Age   From                    Message
I0611 06:53:09.339]   ----    ------            ----  ----                    -------
I0611 06:53:09.339]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-5x45d
I0611 06:53:09.339]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-rzvz4
I0611 06:53:09.339]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-xwsdc
W0611 06:53:09.440] E0611 06:53:09.148388   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:09.541] core.sh:1053: Successful describe
I0611 06:53:09.542] Name:         frontend
I0611 06:53:09.542] Namespace:    namespace-1560235987-6835
I0611 06:53:09.542] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.542] Labels:       app=guestbook
I0611 06:53:09.542]               tier=frontend
I0611 06:53:09.542] Annotations:  <none>
I0611 06:53:09.542] Replicas:     3 current / 3 desired
I0611 06:53:09.542] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.542] Pod Template:
I0611 06:53:09.542]   Labels:  app=guestbook
I0611 06:53:09.543]            tier=frontend
I0611 06:53:09.543]   Containers:
I0611 06:53:09.543]    php-redis:
I0611 06:53:09.543]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0611 06:53:09.593] Namespace:    namespace-1560235987-6835
I0611 06:53:09.594] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.594] Labels:       app=guestbook
I0611 06:53:09.594]               tier=frontend
I0611 06:53:09.594] Annotations:  <none>
I0611 06:53:09.594] Replicas:     3 current / 3 desired
I0611 06:53:09.595] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.595] Pod Template:
I0611 06:53:09.595]   Labels:  app=guestbook
I0611 06:53:09.595]            tier=frontend
I0611 06:53:09.595]   Containers:
I0611 06:53:09.595]    php-redis:
I0611 06:53:09.595]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0611 06:53:09.751] Namespace:    namespace-1560235987-6835
I0611 06:53:09.752] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.752] Labels:       app=guestbook
I0611 06:53:09.752]               tier=frontend
I0611 06:53:09.752] Annotations:  <none>
I0611 06:53:09.752] Replicas:     3 current / 3 desired
I0611 06:53:09.753] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.753] Pod Template:
I0611 06:53:09.753]   Labels:  app=guestbook
I0611 06:53:09.753]            tier=frontend
I0611 06:53:09.754]   Containers:
I0611 06:53:09.754]    php-redis:
I0611 06:53:09.754]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0611 06:53:09.871] Namespace:    namespace-1560235987-6835
I0611 06:53:09.871] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.871] Labels:       app=guestbook
I0611 06:53:09.871]               tier=frontend
I0611 06:53:09.871] Annotations:  <none>
I0611 06:53:09.872] Replicas:     3 current / 3 desired
I0611 06:53:09.872] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.872] Pod Template:
I0611 06:53:09.872]   Labels:  app=guestbook
I0611 06:53:09.872]            tier=frontend
I0611 06:53:09.872]   Containers:
I0611 06:53:09.872]    php-redis:
I0611 06:53:09.872]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0611 06:53:09.997] Namespace:    namespace-1560235987-6835
I0611 06:53:09.997] Selector:     app=guestbook,tier=frontend
I0611 06:53:09.997] Labels:       app=guestbook
I0611 06:53:09.998]               tier=frontend
I0611 06:53:09.998] Annotations:  <none>
I0611 06:53:09.998] Replicas:     3 current / 3 desired
I0611 06:53:09.998] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:09.998] Pod Template:
I0611 06:53:09.998]   Labels:  app=guestbook
I0611 06:53:09.998]            tier=frontend
I0611 06:53:09.998]   Containers:
I0611 06:53:09.998]    php-redis:
I0611 06:53:09.998]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0611 06:53:10.119] Namespace:    namespace-1560235987-6835
I0611 06:53:10.119] Selector:     app=guestbook,tier=frontend
I0611 06:53:10.119] Labels:       app=guestbook
I0611 06:53:10.119]               tier=frontend
I0611 06:53:10.119] Annotations:  <none>
I0611 06:53:10.120] Replicas:     3 current / 3 desired
I0611 06:53:10.120] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:10.120] Pod Template:
I0611 06:53:10.120]   Labels:  app=guestbook
I0611 06:53:10.120]            tier=frontend
I0611 06:53:10.120]   Containers:
I0611 06:53:10.120]    php-redis:
I0611 06:53:10.120]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0611 06:53:10.121]   ----    ------            ----  ----                    -------
I0611 06:53:10.121]   Normal  SuccessfulCreate  2s    replication-controller  Created pod: frontend-5x45d
I0611 06:53:10.121]   Normal  SuccessfulCreate  2s    replication-controller  Created pod: frontend-rzvz4
I0611 06:53:10.121]   Normal  SuccessfulCreate  2s    replication-controller  Created pod: frontend-xwsdc
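The repeated describe blocks above come from kubectl describe in its event and no-event forms; a sketch of the likely invocations (the flag choice is an assumption):

    # Full describe, including the SuccessfulCreate events shown above.
    kubectl describe rc frontend
    # Same output without the Events section.
    kubectl describe rc frontend --show-events=false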
I0611 06:53:10.211] core.sh:1067: Successful get rc frontend {{.spec.replicas}}: 3
I0611 06:53:10.322] replicationcontroller/frontend scaled
W0611 06:53:10.423] E0611 06:53:09.786057   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:10.424] E0611 06:53:09.918844   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:10.424] E0611 06:53:10.025643   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:10.424] E0611 06:53:10.150142   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:10.424] I0611 06:53:10.333015   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"67dc5ef4-c180-4944-b5dd-b6397e4209ae", APIVersion:"v1", ResourceVersion:"1623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-5x45d
I0611 06:53:10.525] core.sh:1071: Successful get rc frontend {{.spec.replicas}}: 2
I0611 06:53:10.539] core.sh:1075: Successful get rc frontend {{.spec.replicas}}: 2
I0611 06:53:10.737] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 2
I0611 06:53:10.830] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0611 06:53:10.925] replicationcontroller/frontend scaled
W0611 06:53:11.025] error: Expected replicas to be 3, was 2
W0611 06:53:11.026] E0611 06:53:10.787595   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:11.027] E0611 06:53:10.920459   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:11.027] I0611 06:53:10.931479   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"67dc5ef4-c180-4944-b5dd-b6397e4209ae", APIVersion:"v1", ResourceVersion:"1630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nxls4
W0611 06:53:11.028] E0611 06:53:11.027396   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:11.128] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0611 06:53:11.136] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0611 06:53:11.226] replicationcontroller/frontend scaled
I0611 06:53:11.327] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0611 06:53:11.419] replicationcontroller "frontend" deleted
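The scale sequence above (core.sh:1067-1095) drives kubectl scale, including the precondition failure printed in the warnings; a minimal sketch, with the rc name taken from the log:

    # Unconditional scale, matching "replicationcontroller/frontend scaled".
    kubectl scale rc frontend --replicas=2
    # Scale with a precondition: when the rc actually has 2 replicas this
    # fails with "error: Expected replicas to be 3, was 2", as seen above.
    kubectl scale rc frontend --current-replicas=3 --replicas=4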
W0611 06:53:11.520] E0611 06:53:11.151900   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:11.520] I0611 06:53:11.232754   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"67dc5ef4-c180-4944-b5dd-b6397e4209ae", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-nxls4
I0611 06:53:11.621] replicationcontroller/redis-master created
W0611 06:53:11.722] I0611 06:53:11.623223   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-master", UID:"c8b528d9-4770-4c58-9db4-f21d65455af0", APIVersion:"v1", ResourceVersion:"1646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-5ws6z
W0611 06:53:11.790] E0611 06:53:11.789535   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:11.806] I0611 06:53:11.805378   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-slave", UID:"61add49b-ce6c-4bee-b2bc-bc69b86f3860", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-nmtgk
W0611 06:53:11.811] I0611 06:53:11.810889   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-slave", UID:"61add49b-ce6c-4bee-b2bc-bc69b86f3860", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-czhkw
W0611 06:53:11.911] I0611 06:53:11.911256   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-master", UID:"c8b528d9-4770-4c58-9db4-f21d65455af0", APIVersion:"v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-dj6bb
W0611 06:53:11.917] I0611 06:53:11.916809   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-master", UID:"c8b528d9-4770-4c58-9db4-f21d65455af0", APIVersion:"v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-lcfc6
W0611 06:53:11.921] I0611 06:53:11.918256   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-master", UID:"c8b528d9-4770-4c58-9db4-f21d65455af0", APIVersion:"v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-7pgng
W0611 06:53:11.924] E0611 06:53:11.924295   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:11.926] I0611 06:53:11.925965   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-slave", UID:"61add49b-ce6c-4bee-b2bc-bc69b86f3860", APIVersion:"v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-fbdkd
W0611 06:53:11.935] I0611 06:53:11.935090   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-slave", UID:"61add49b-ce6c-4bee-b2bc-bc69b86f3860", APIVersion:"v1", ResourceVersion:"1660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-czxdc
W0611 06:53:12.029] E0611 06:53:12.029192   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:12.130] replicationcontroller/redis-slave created
I0611 06:53:12.131] replicationcontroller/redis-master scaled
I0611 06:53:12.131] replicationcontroller/redis-slave scaled
I0611 06:53:12.132] core.sh:1105: Successful get rc redis-master {{.spec.replicas}}: 4
I0611 06:53:12.132] core.sh:1106: Successful get rc redis-slave {{.spec.replicas}}: 4
I0611 06:53:12.196] replicationcontroller "redis-master" deleted
I0611 06:53:12.206] replicationcontroller "redis-slave" deleted
W0611 06:53:12.307] E0611 06:53:12.153592   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:12.396] I0611 06:53:12.395692   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment", UID:"8d7e8189-8c40-4105-883a-50ba677e4b47", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0611 06:53:12.401] I0611 06:53:12.401022   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"9e09e184-93a5-4162-8849-02f2f40ef229", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-b6h9s
W0611 06:53:12.406] I0611 06:53:12.406088   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"9e09e184-93a5-4162-8849-02f2f40ef229", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-h9926
W0611 06:53:12.410] I0611 06:53:12.410182   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"9e09e184-93a5-4162-8849-02f2f40ef229", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-b7b5c
I0611 06:53:12.511] deployment.apps/nginx-deployment created
I0611 06:53:12.514] deployment.extensions/nginx-deployment scaled
I0611 06:53:12.615] core.sh:1115: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0611 06:53:12.692] deployment.extensions "nginx-deployment" deleted
W0611 06:53:12.793] I0611 06:53:12.519510   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment", UID:"8d7e8189-8c40-4105-883a-50ba677e4b47", APIVersion:"apps/v1", ResourceVersion:"1706", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-995fc88bd to 1
W0611 06:53:12.794] I0611 06:53:12.527488   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"9e09e184-93a5-4162-8849-02f2f40ef229", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-995fc88bd-b7b5c
W0611 06:53:12.794] I0611 06:53:12.527793   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"9e09e184-93a5-4162-8849-02f2f40ef229", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-995fc88bd-h9926
W0611 06:53:12.795] E0611 06:53:12.791251   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:12.896] Successful
I0611 06:53:12.896] message:service/expose-test-deployment exposed
I0611 06:53:12.896] has:service/expose-test-deployment exposed
I0611 06:53:12.896] service "expose-test-deployment" deleted
I0611 06:53:12.980] Successful
I0611 06:53:12.981] message:service/expose-test-deployment exposed
I0611 06:53:12.981] has:service/expose-test-deployment exposed
I0611 06:53:13.064] service "expose-test-deployment" deleted
W0611 06:53:13.165] E0611 06:53:12.925595   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:13.165] E0611 06:53:13.030672   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:13.166] E0611 06:53:13.155500   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:13.266] Successful
I0611 06:53:13.266] message:service/expose-test-deployment exposed
I0611 06:53:13.266] has:service/expose-test-deployment exposed
I0611 06:53:13.266] service "expose-test-deployment" deleted
I0611 06:53:13.345] Successful
I0611 06:53:13.346] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0611 06:53:13.346] See 'kubectl expose -h' for help and examples
I0611 06:53:13.346] has:invalid deployment: no selectors
I0611 06:53:13.430] Successful
I0611 06:53:13.430] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0611 06:53:13.431] See 'kubectl expose -h' for help and examples
I0611 06:53:13.431] has:invalid deployment: no selectors
I0611 06:53:13.592] deployment.apps/nginx-deployment created
W0611 06:53:13.693] I0611 06:53:13.599190   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment", UID:"ab231955-f463-4eb8-9057-c964bba7153d", APIVersion:"apps/v1", ResourceVersion:"1744", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0611 06:53:13.694] I0611 06:53:13.606111   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"8f4ddfd4-c984-4755-bdb4-60e78a7a2a72", APIVersion:"apps/v1", ResourceVersion:"1745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-wdvb8
W0611 06:53:13.694] I0611 06:53:13.612154   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"8f4ddfd4-c984-4755-bdb4-60e78a7a2a72", APIVersion:"apps/v1", ResourceVersion:"1745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-kb5nr
W0611 06:53:13.694] I0611 06:53:13.612303   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-995fc88bd", UID:"8f4ddfd4-c984-4755-bdb4-60e78a7a2a72", APIVersion:"apps/v1", ResourceVersion:"1745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-dd4nn
W0611 06:53:13.793] E0611 06:53:13.792453   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:13.893] core.sh:1150: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0611 06:53:13.894] service/nginx-deployment exposed
I0611 06:53:13.894] core.sh:1154: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0611 06:53:13.977] deployment.extensions "nginx-deployment" deleted
I0611 06:53:13.993] service "nginx-deployment" deleted
W0611 06:53:14.094] E0611 06:53:13.927413   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:14.094] E0611 06:53:14.032263   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:14.158] E0611 06:53:14.157495   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:14.194] I0611 06:53:14.193750   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"18c31e49-7013-45d0-b353-7837ca233b14", APIVersion:"v1", ResourceVersion:"1772", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ksfzv
W0611 06:53:14.200] I0611 06:53:14.199487   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"18c31e49-7013-45d0-b353-7837ca233b14", APIVersion:"v1", ResourceVersion:"1772", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h6v8s
W0611 06:53:14.203] I0611 06:53:14.201993   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"18c31e49-7013-45d0-b353-7837ca233b14", APIVersion:"v1", ResourceVersion:"1772", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4xd97
I0611 06:53:14.304] replicationcontroller/frontend created
I0611 06:53:14.304] core.sh:1161: Successful get rc frontend {{.spec.replicas}}: 3
I0611 06:53:14.390] service/frontend exposed
I0611 06:53:14.489] core.sh:1165: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0611 06:53:14.577] service/frontend-2 exposed
I0611 06:53:14.671] core.sh:1169: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I0611 06:53:14.840] pod/valid-pod created
W0611 06:53:14.941] E0611 06:53:14.794323   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:14.941] E0611 06:53:14.929090   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:15.034] E0611 06:53:15.033872   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:15.135] service/frontend-3 exposed
I0611 06:53:15.136] core.sh:1174: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
I0611 06:53:15.155] service/frontend-4 exposed
I0611 06:53:15.251] core.sh:1178: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
I0611 06:53:15.340] service/frontend-5 exposed
I0611 06:53:15.437] core.sh:1182: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
I0611 06:53:15.530] pod "valid-pod" deleted
I0611 06:53:15.628] service "frontend" deleted
I0611 06:53:15.637] service "frontend-2" deleted
I0611 06:53:15.643] service "frontend-3" deleted
I0611 06:53:15.651] service "frontend-4" deleted
I0611 06:53:15.668] service "frontend-5" deleted
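The frontend through frontend-5 services above come from kubectl expose with varying --name and --port values; a sketch consistent with the port assertions (the exact flags are an assumption):

    # Expose the rc on port 80; core.sh:1165 reads back "<no value> 80".
    kubectl expose rc frontend --port=80
    # A second service over the same rc, on port 443.
    kubectl expose rc frontend --name=frontend-2 --port=443
    # Pods can be exposed the same way, e.g. valid-pod on port 444.
    kubectl expose pod valid-pod --name=frontend-3 --port=444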
W0611 06:53:15.769] E0611 06:53:15.159986   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:15.796] E0611 06:53:15.796078   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:15.897] Successful
I0611 06:53:15.897] message:error: cannot expose a Node
I0611 06:53:15.897] has:cannot expose
I0611 06:53:15.898] Successful
I0611 06:53:15.898] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0611 06:53:15.898] has:metadata.name: Invalid value
I0611 06:53:15.971] Successful
I0611 06:53:15.971] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0611 06:53:15.971] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0611 06:53:16.057] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
W0611 06:53:16.158] E0611 06:53:15.930687   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:16.158] E0611 06:53:16.035282   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:16.162] E0611 06:53:16.162054   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:16.263] Successful
I0611 06:53:16.263] message:service/etcd-server exposed
I0611 06:53:16.263] has:etcd-server exposed
I0611 06:53:16.266] core.sh:1212: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I0611 06:53:16.368] core.sh:1213: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I0611 06:53:16.451] service "etcd-server" deleted
I0611 06:53:16.550] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0611 06:53:16.630] replicationcontroller "frontend" deleted
I0611 06:53:16.735] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:16.825] core.sh:1227: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:16.995] replicationcontroller/frontend created
W0611 06:53:17.096] E0611 06:53:16.797579   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:17.096] E0611 06:53:16.932092   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:17.097] I0611 06:53:17.000352   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d9861aac-43c7-4e50-ae1d-b3be64223301", APIVersion:"v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bc4hv
W0611 06:53:17.097] I0611 06:53:17.006341   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d9861aac-43c7-4e50-ae1d-b3be64223301", APIVersion:"v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-72lnh
W0611 06:53:17.097] I0611 06:53:17.008358   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d9861aac-43c7-4e50-ae1d-b3be64223301", APIVersion:"v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4xf5b
W0611 06:53:17.098] E0611 06:53:17.036646   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:17.164] E0611 06:53:17.164259   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:17.216] I0611 06:53:17.215156   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-slave", UID:"f1c522ce-a8a1-4080-8901-ad2b751bdf4f", APIVersion:"v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-ncnzc
W0611 06:53:17.222] I0611 06:53:17.221392   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"redis-slave", UID:"f1c522ce-a8a1-4080-8901-ad2b751bdf4f", APIVersion:"v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-vqk8q
I0611 06:53:17.322] replicationcontroller/redis-slave created
I0611 06:53:17.323] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0611 06:53:17.422] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0611 06:53:17.514] replicationcontroller "frontend" deleted
I0611 06:53:17.522] replicationcontroller "redis-slave" deleted
I0611 06:53:17.635] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:17.734] core.sh:1244: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:17.926] replicationcontroller/frontend created
I0611 06:53:18.034] core.sh:1247: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0611 06:53:18.117] horizontalpodautoscaler.autoscaling/frontend autoscaled
W0611 06:53:18.217] E0611 06:53:17.798890   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:18.218] I0611 06:53:17.932973   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d6b83956-db97-40a7-b14c-8e611bc44dfb", APIVersion:"v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jbswk
W0611 06:53:18.218] E0611 06:53:17.935019   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:18.218] I0611 06:53:17.937968   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d6b83956-db97-40a7-b14c-8e611bc44dfb", APIVersion:"v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gzvmh
W0611 06:53:18.219] I0611 06:53:17.943992   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1560235987-6835", Name:"frontend", UID:"d6b83956-db97-40a7-b14c-8e611bc44dfb", APIVersion:"v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vljxn
W0611 06:53:18.219] E0611 06:53:18.038434   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:18.219] E0611 06:53:18.165941   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:18.319] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0611 06:53:18.320] horizontalpodautoscaler.autoscaling "frontend" deleted
I0611 06:53:18.402] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0611 06:53:18.507] core.sh:1254: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0611 06:53:18.588] horizontalpodautoscaler.autoscaling "frontend" deleted
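core.sh:1250 and core.sh:1254 read back minReplicas, maxReplicas, and the CPU target; a sketch of autoscale calls consistent with the "1 2 70" and "2 3 80" readings:

    # HPA with min=1, max=2 and a 70% CPU target.
    kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70
    # HPA with min=2, max=3 and an 80% CPU target; omitting --max produces
    # the 'required flag(s) "max" not set' error shown just below.
    kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80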
W0611 06:53:18.688] Error: required flag(s) "max" not set
W0611 06:53:18.689] 
W0611 06:53:18.689] 
W0611 06:53:18.689] Examples:
W0611 06:53:18.689]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0611 06:53:18.689]   kubectl autoscale deployment foo --min=2 --max=10
W0611 06:53:18.689]   
... skipping 55 lines ...
I0611 06:53:18.944]           limits:
I0611 06:53:18.944]             cpu: 300m
I0611 06:53:18.944]           requests:
I0611 06:53:18.945]             cpu: 300m
I0611 06:53:18.945]       terminationGracePeriodSeconds: 0
I0611 06:53:18.945] status: {}
W0611 06:53:19.046] E0611 06:53:18.800692   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:19.046] E0611 06:53:18.936566   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:19.046] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
W0611 06:53:19.047] E0611 06:53:19.040126   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:19.168] E0611 06:53:19.168102   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:19.216] I0611 06:53:19.215438   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources", UID:"3350b4f6-53fa-4800-8cb0-3a9ff28d66b3", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8495f5b88c to 3
W0611 06:53:19.220] I0611 06:53:19.220144   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-8495f5b88c", UID:"7326d5ad-dbf8-42ec-8718-a811b29db541", APIVersion:"apps/v1", ResourceVersion:"1885", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8495f5b88c-r7ksc
W0611 06:53:19.227] I0611 06:53:19.227060   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-8495f5b88c", UID:"7326d5ad-dbf8-42ec-8718-a811b29db541", APIVersion:"apps/v1", ResourceVersion:"1885", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8495f5b88c-5kkbq
W0611 06:53:19.230] I0611 06:53:19.229645   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-8495f5b88c", UID:"7326d5ad-dbf8-42ec-8718-a811b29db541", APIVersion:"apps/v1", ResourceVersion:"1885", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8495f5b88c-bsh6r
I0611 06:53:19.330] deployment.apps/nginx-deployment-resources created
I0611 06:53:19.330] core.sh:1269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0611 06:53:19.421] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0611 06:53:19.519] core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0611 06:53:19.620] deployment.extensions/nginx-deployment-resources resource requirements updated
W0611 06:53:19.721] I0611 06:53:19.628029   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources", UID:"3350b4f6-53fa-4800-8cb0-3a9ff28d66b3", APIVersion:"apps/v1", ResourceVersion:"1898", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-68fb698987 to 1
W0611 06:53:19.722] I0611 06:53:19.633946   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-68fb698987", UID:"cc40a02e-0b55-4a18-9a5f-e873b6ace6d6", APIVersion:"apps/v1", ResourceVersion:"1899", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-68fb698987-cs677
W0611 06:53:19.803] E0611 06:53:19.802660   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:19.904] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0611 06:53:19.904] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0611 06:53:20.035] deployment.extensions/nginx-deployment-resources resource requirements updated
W0611 06:53:20.136] error: unable to find container named redis
W0611 06:53:20.136] E0611 06:53:19.938322   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:20.137] E0611 06:53:20.042618   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:20.137] I0611 06:53:20.064290   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources", UID:"3350b4f6-53fa-4800-8cb0-3a9ff28d66b3", APIVersion:"apps/v1", ResourceVersion:"1908", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-68fb698987 to 0
W0611 06:53:20.137] I0611 06:53:20.073359   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-68fb698987", UID:"cc40a02e-0b55-4a18-9a5f-e873b6ace6d6", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-68fb698987-cs677
W0611 06:53:20.138] I0611 06:53:20.091523   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources", UID:"3350b4f6-53fa-4800-8cb0-3a9ff28d66b3", APIVersion:"apps/v1", ResourceVersion:"1911", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-cb4546d8f to 1
W0611 06:53:20.138] I0611 06:53:20.099754   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-cb4546d8f", UID:"8f8938c3-0bcd-44fd-8958-d2ffb06ed71b", APIVersion:"apps/v1", ResourceVersion:"1919", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-cb4546d8f-r4wwg
W0611 06:53:20.170] E0611 06:53:20.170080   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:20.271] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0611 06:53:20.272] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0611 06:53:20.358] deployment.apps/nginx-deployment-resources resource requirements updated
W0611 06:53:20.460] I0611 06:53:20.385679   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources", UID:"3350b4f6-53fa-4800-8cb0-3a9ff28d66b3", APIVersion:"apps/v1", ResourceVersion:"1928", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-8495f5b88c to 2
W0611 06:53:20.460] I0611 06:53:20.394262   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources-8495f5b88c", UID:"7326d5ad-dbf8-42ec-8718-a811b29db541", APIVersion:"apps/v1", ResourceVersion:"1932", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-8495f5b88c-r7ksc
W0611 06:53:20.461] I0611 06:53:20.413072   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560235987-6835", Name:"nginx-deployment-resources", UID:"3350b4f6-53fa-4800-8cb0-3a9ff28d66b3", APIVersion:"apps/v1", ResourceVersion:"1931", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-c97dcbbb4 to 1
... skipping 218 lines ...
I0611 06:53:20.812]     status: "True"
I0611 06:53:20.812]     type: Progressing
I0611 06:53:20.812]   observedGeneration: 4
I0611 06:53:20.812]   replicas: 4
I0611 06:53:20.812]   unavailableReplicas: 4
I0611 06:53:20.812]   updatedReplicas: 1
W0611 06:53:20.912] E0611 06:53:20.804717   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:20.913] error: you must specify resources by --filename when --local is set.
W0611 06:53:20.913] Example resource specifications include:
W0611 06:53:20.913]    '-f rsrc.yaml'
W0611 06:53:20.913]    '--filename=rsrc.json'
W0611 06:53:20.941] E0611 06:53:20.940367   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:21.041] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0611 06:53:21.042] core.sh:1291: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0611 06:53:21.129] core.sh:1292: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I0611 06:53:21.224] deployment.extensions "nginx-deployment-resources" deleted
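The "resource requirements updated" lines above come from kubectl set resources (core.sh:1269-1292); a minimal sketch, with the container name taken from the log and the filename hypothetical:

    # Update one container's resources by name; a wrong -c value yields
    # "error: unable to find container named redis", as logged above.
    kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
    # --local renders the change without contacting the server, but the object
    # must then come from a file, hence "you must specify resources by
    # --filename when --local is set." above.
    kubectl set resources -f deploy.yaml --local --limits=cpu=200m -o yaml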
I0611 06:53:21.254] +++ exit code: 0
I0611 06:53:21.294] Recording: run_deployment_tests
... skipping 4 lines ...
I0611 06:53:21.327] +++ command: run_deployment_tests
I0611 06:53:21.340] +++ [0611 06:53:21] Creating namespace namespace-1560236001-21894
I0611 06:53:21.422] namespace/namespace-1560236001-21894 created
I0611 06:53:21.499] Context "test" modified.
I0611 06:53:21.506] +++ [0611 06:53:21] Testing deployments
I0611 06:53:21.598] deployment.apps/test-nginx-extensions created
W0611 06:53:21.699] E0611 06:53:21.045417   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:21.699] E0611 06:53:21.179883   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:21.700] I0611 06:53:21.606752   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560236001-21894", Name:"test-nginx-extensions", UID:"3571ba17-881b-4e78-b648-e214e1a503b4", APIVersion:"apps/v1", ResourceVersion:"1964", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-6464c67b69 to 1
W0611 06:53:21.701] I0611 06:53:21.616480   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"test-nginx-extensions-6464c67b69", UID:"aea2c768-0130-4ec0-8099-fb2f5af2285d", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-6464c67b69-dj97r
I0611 06:53:21.801] apps.sh:188: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
I0611 06:53:21.840] Successful
I0611 06:53:21.841] message:10
I0611 06:53:21.841] has not:2
I0611 06:53:21.934] Successful
I0611 06:53:21.935] message:extensions/v1beta1
I0611 06:53:21.935] has:extensions/v1beta1
I0611 06:53:22.027] Successful
I0611 06:53:22.027] message:apps/v1
I0611 06:53:22.027] has:apps/v1
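The two apiVersion checks above fall out of reading the same deployment back through different API groups; a sketch of the kind of invocation (the template form is an assumption):

    # Fetched via the extensions group the object reports extensions/v1beta1...
    kubectl get deployment.extensions test-nginx-extensions -o go-template='{{.apiVersion}}'
    # ...and via the apps group it reports apps/v1, matching the checks above.
    kubectl get deployment.apps test-nginx-extensions -o go-template='{{.apiVersion}}'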
I0611 06:53:22.115] deployment.extensions "test-nginx-extensions" deleted
W0611 06:53:22.216] E0611 06:53:21.806576   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:22.217] E0611 06:53:21.941823   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:22.218] E0611 06:53:22.047225   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:22.218] E0611 06:53:22.181322   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:22.224] I0611 06:53:22.223599   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560236001-21894", Name:"test-nginx-apps", UID:"c156bf0a-95f4-4b20-a1b6-6e6bbafa9ee9", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-84cc8d94ff to 1
W0611 06:53:22.230] I0611 06:53:22.229438   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"test-nginx-apps-84cc8d94ff", UID:"ec608782-9eab-46b8-91c6-84481b212996", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-84cc8d94ff-r7bkw
I0611 06:53:22.330] deployment.apps/test-nginx-apps created
I0611 06:53:22.332] apps.sh:203: Successful get deploy test-nginx-apps {{(index .spec.template.spec.containers 0).name}}: nginx
I0611 06:53:22.418] Successful
I0611 06:53:22.418] message:2
... skipping 12 lines ...
I0611 06:53:22.759]                 pod-template-hash=84cc8d94ff
I0611 06:53:22.759] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0611 06:53:22.759]                 deployment.kubernetes.io/max-replicas: 2
I0611 06:53:22.759]                 deployment.kubernetes.io/revision: 1
I0611 06:53:22.760] Controlled By:  Deployment/test-nginx-apps
I0611 06:53:22.760] Replicas:       1 current / 1 desired
I0611 06:53:22.760] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0611 06:53:22.760] Pod Template:
I0611 06:53:22.760]   Labels:  app=test-nginx-apps
I0611 06:53:22.760]            pod-template-hash=84cc8d94ff
I0611 06:53:22.760]   Containers:
I0611 06:53:22.760]    nginx:
I0611 06:53:22.760]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 31 lines ...
I0611 06:53:22.883] Events:           <none>
I0611 06:53:22.968] deployment.extensions "test-nginx-apps" deleted
I0611 06:53:23.069] apps.sh:221: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:23.151] deployment.apps/nginx-with-command created
I0611 06:53:23.257] apps.sh:225: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
I0611 06:53:23.350] deployment.extensions "nginx-with-command" deleted
W0611 06:53:23.451] E0611 06:53:22.808448   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:23.452] E0611 06:53:22.943735   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:23.452] E0611 06:53:23.048657   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:23.453] I0611 06:53:23.158697   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560236001-21894", Name:"nginx-with-command", UID:"3a6c3ab4-8441-448f-a76e-6f24618f057f", APIVersion:"apps/v1", ResourceVersion:"1994", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-6bdd884 to 1
W0611 06:53:23.453] I0611 06:53:23.164055   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"nginx-with-command-6bdd884", UID:"c95b96ae-2387-4665-8c59-4b3232dcb33d", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-6bdd884-hfzck
W0611 06:53:23.454] E0611 06:53:23.183803   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:23.554] apps.sh:231: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:23.655] deployment.apps/deployment-with-unixuserid created
W0611 06:53:23.757] I0611 06:53:23.660846   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560236001-21894", Name:"deployment-with-unixuserid", UID:"c4316ed8-8e84-4367-9ee3-b62c62f766c9", APIVersion:"apps/v1", ResourceVersion:"2008", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-56fc88c5d7 to 1
W0611 06:53:23.757] I0611 06:53:23.666136   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"deployment-with-unixuserid-56fc88c5d7", UID:"f2eb7574-8d09-4e23-9a19-c981e1ac57fb", APIVersion:"apps/v1", ResourceVersion:"2009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-56fc88c5d7-mhvdv
W0611 06:53:23.810] E0611 06:53:23.810170   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0611 06:53:23.911] apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
I0611 06:53:23.912] deployment.extensions "deployment-with-unixuserid" deleted
I0611 06:53:23.961] apps.sh:242: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0611 06:53:24.147] deployment.apps/nginx-deployment created
W0611 06:53:24.248] E0611 06:53:23.945288   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:24.248] E0611 06:53:24.050431   51471 reflector.go:125] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: Failed to list *unstructured.Unstructured: the server could not find the requested resource
W0611 06:53:24.249] I0611 06:53:24.154704   51471 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1560236001-21894", Name:"nginx-deployment", UID:"ac807821-df3e-4c04-a9a7-161a90daa217", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-995fc88bd to 3
W0611 06:53:24.249] I0611 06:53:24.161574   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"nginx-deployment-995fc88bd", UID:"f7672f64-fa95-40d6-bdb2-70d54d255000", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-zcnkv
W0611 06:53:24.249] I0611 06:53:24.168056   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"nginx-deployment-995fc88bd", UID:"f7672f64-fa95-40d6-bdb2-70d54d255000", APIVersion:"apps/v1", ResourceVersion:"2023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-995fc88bd-sxz6p
W0611 06:53:24.249] I0611 06:53:24.170759   51471 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1560236001-21894", Name:"nginx-deployment-995fc88bd", UID:"f7672f64-fa95-40d6-