PR: oomichi: Use ExpectNoError() for e2e/upgrades
Result: FAILURE
Tests: 1 failed / 1429 succeeded
Started: 2019-05-16 01:47
Elapsed: 36m43s
Revision: master:aaec77a9, 77955:333f3d8b
Builder: gke-prow-containerd-pool-99179761-043t
pod: 7103de79-777c-11e9-8ee0-0a580a6c0dad
infra-commit: e3e353739
repo: k8s.io/kubernetes
repo-commit: 2ebd40964b8b67b8501f9726bcff8d69b7e8f0df
repos: {u'k8s.io/kubernetes': u'master:aaec77a94b67878ca1bdd884f2778f4388d203f2,77955:333f3d8b9a1754e4f4d66b2a6ddea9ca8342d189'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestUnreservePlugin 5.11s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestUnreservePlugin$
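
For context, TestUnreservePlugin exercises the scheduler framework's Unreserve extension point, the rollback hook that runs when a pod reserved on a node cannot complete the rest of its scheduling cycle. The Go sketch below is illustrative only and is not the test's source: Pod and CycleState are simplified stand-ins for the real framework types under k8s.io/kubernetes/pkg/scheduler, and it only shows the general shape of the plugin such a test registers and asserts against.

// Illustrative sketch only; not the code of TestUnreservePlugin.
package main

import "fmt"

// Pod and CycleState are simplified placeholders for the framework types.
type Pod struct{ Name string }
type CycleState struct{}

// unreservePlugin records which pods its rollback hook ran for, which is
// the kind of assertion the integration test makes after forcing a later
// scheduling phase to fail.
type unreservePlugin struct{ calledFor []string }

func (p *unreservePlugin) Name() string { return "test-unreserve-plugin" }

// Unreserve is the rollback hook: it is invoked when a pod that was
// reserved on a node cannot finish binding.
func (p *unreservePlugin) Unreserve(state *CycleState, pod *Pod, nodeName string) {
	p.calledFor = append(p.calledFor, pod.Name+"/"+nodeName)
}

func main() {
	plugin := &unreservePlugin{}
	// Simulate the framework rolling back a failed reservation.
	plugin.Unreserve(&CycleState{}, &Pod{Name: "test-pod"}, "node-1")
	fmt.Println("unreserve called for:", plugin.calledFor)
}
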
I0516 02:14:45.987537  107839 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0516 02:14:45.987654  107839 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0516 02:14:45.987710  107839 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0516 02:14:45.987754  107839 master.go:233] Using reconciler: 
I0516 02:14:45.990393  107839 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:45.990620  107839 client.go:354] parsed scheme: ""
I0516 02:14:45.990679  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:45.990788  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:45.990940  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:45.992037  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:45.992252  107839 store.go:1320] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0516 02:14:45.992329  107839 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:45.992571  107839 client.go:354] parsed scheme: ""
I0516 02:14:45.992635  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:45.992730  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:45.992885  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:45.992997  107839 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0516 02:14:45.993323  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:45.994805  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:45.997381  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:45.997614  107839 store.go:1320] Monitoring events count at <storage-prefix>//events
I0516 02:14:45.997692  107839 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:45.997834  107839 client.go:354] parsed scheme: ""
I0516 02:14:45.997891  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:45.997989  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:45.998142  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:45.998264  107839 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0516 02:14:45.998517  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.001592  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.002621  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.002787  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.003013  107839 store.go:1320] Monitoring limitranges count at <storage-prefix>//limitranges
I0516 02:14:46.003052  107839 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.003151  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.003166  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.003203  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.003250  107839 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0516 02:14:46.003555  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.004079  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.004176  107839 store.go:1320] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0516 02:14:46.004356  107839 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.004443  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.004457  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.004510  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.004553  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.004584  107839 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0516 02:14:46.004847  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.005142  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.005232  107839 store.go:1320] Monitoring secrets count at <storage-prefix>//secrets
I0516 02:14:46.005369  107839 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.005430  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.005442  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.005472  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.005519  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.005551  107839 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0516 02:14:46.005770  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.006067  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.006164  107839 store.go:1320] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0516 02:14:46.006232  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.006326  107839 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0516 02:14:46.006354  107839 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.006426  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.006438  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.006535  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.006602  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.007069  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.007416  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.008067  107839 store.go:1320] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0516 02:14:46.008277  107839 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0516 02:14:46.008259  107839 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.008480  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.008505  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.008540  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.008601  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.008921  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.009021  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.009091  107839 store.go:1320] Monitoring configmaps count at <storage-prefix>//configmaps
I0516 02:14:46.009181  107839 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0516 02:14:46.009259  107839 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.009344  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.009369  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.009405  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.009503  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.010305  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.010366  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.012131  107839 store.go:1320] Monitoring namespaces count at <storage-prefix>//namespaces
I0516 02:14:46.012343  107839 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0516 02:14:46.012783  107839 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.013442  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.013811  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.014221  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.014680  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.014740  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.015069  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.015938  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.017758  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.017812  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.017875  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.018066  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.021884  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.022105  107839 store.go:1320] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0516 02:14:46.022301  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.022329  107839 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.022433  107839 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0516 02:14:46.022471  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.022486  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.022532  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.022727  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.023374  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.023806  107839 store.go:1320] Monitoring nodes count at <storage-prefix>//minions
I0516 02:14:46.023913  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.024199  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.024199  107839 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.024256  107839 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0516 02:14:46.024311  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.024360  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.024402  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.024493  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.026208  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.026365  107839 store.go:1320] Monitoring pods count at <storage-prefix>//pods
I0516 02:14:46.026515  107839 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.026600  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.026622  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.026667  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.026738  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.026796  107839 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0516 02:14:46.027081  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.028201  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.028240  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.028773  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.028864  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.029030  107839 store.go:1320] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0516 02:14:46.029198  107839 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.029339  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.029360  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.029402  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.029480  107839 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0516 02:14:46.029652  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.029969  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.030104  107839 store.go:1320] Monitoring services count at <storage-prefix>//services/specs
I0516 02:14:46.030132  107839 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.030190  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.030243  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.030257  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.030345  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.030383  107839 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0516 02:14:46.030415  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.030675  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.030727  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.030783  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.030798  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.030829  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.030873  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.030914  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.031208  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.031316  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.031398  107839 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.031494  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.031513  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.031593  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.031651  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.032812  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.033148  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.033284  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.033336  107839 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0516 02:14:46.033393  107839 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0516 02:14:46.034603  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.057236  107839 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0516 02:14:46.057295  107839 master.go:425] Enabling API group "authentication.k8s.io".
I0516 02:14:46.057313  107839 master.go:425] Enabling API group "authorization.k8s.io".
I0516 02:14:46.057567  107839 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.057740  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.057787  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.057837  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.058015  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.058517  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.058667  107839 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0516 02:14:46.058860  107839 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.058991  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.059010  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.059086  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.059170  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.059220  107839 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0516 02:14:46.059453  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.060829  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.060978  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.061000  107839 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0516 02:14:46.061008  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.061082  107839 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0516 02:14:46.061238  107839 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.061323  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.061334  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.061365  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.061427  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.061739  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.062241  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.062366  107839 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0516 02:14:46.062506  107839 master.go:425] Enabling API group "autoscaling".
I0516 02:14:46.062728  107839 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.062838  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.063133  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.062472  107839 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0516 02:14:46.063150  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.063797  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.063862  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.064773  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.064814  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.065037  107839 store.go:1320] Monitoring jobs.batch count at <storage-prefix>//jobs
I0516 02:14:46.065224  107839 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0516 02:14:46.065225  107839 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.065347  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.065364  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.065398  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.065477  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.065859  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.065927  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.066022  107839 store.go:1320] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0516 02:14:46.066045  107839 master.go:425] Enabling API group "batch".
I0516 02:14:46.066556  107839 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.066662  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.066676  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.066723  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.066836  107839 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0516 02:14:46.067121  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.067425  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.067512  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.067583  107839 store.go:1320] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0516 02:14:46.067611  107839 master.go:425] Enabling API group "certificates.k8s.io".
I0516 02:14:46.067641  107839 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0516 02:14:46.067792  107839 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.067887  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.067904  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.067943  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.068048  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.068828  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.068913  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.068920  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.068931  107839 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0516 02:14:46.069158  107839 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.069241  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.069249  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.069253  107839 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0516 02:14:46.069253  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.069355  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.069405  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.069551  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.069790  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.069928  107839 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0516 02:14:46.069942  107839 master.go:425] Enabling API group "coordination.k8s.io".
I0516 02:14:46.070141  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.070488  107839 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.070583  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.070601  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.070618  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.070771  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.070847  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.070938  107839 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0516 02:14:46.071188  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.071315  107839 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0516 02:14:46.071424  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.071536  107839 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0516 02:14:46.071537  107839 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.071607  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.071618  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.071649  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.071842  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.072131  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.072225  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.072284  107839 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0516 02:14:46.072320  107839 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0516 02:14:46.072464  107839 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.072568  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.072585  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.072619  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.072686  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.072990  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.073085  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.073155  107839 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 02:14:46.073205  107839 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 02:14:46.073338  107839 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.073420  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.073432  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.073501  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.073596  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.073884  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.073971  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.074029  107839 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0516 02:14:46.074207  107839 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.074239  107839 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0516 02:14:46.074384  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.074399  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.074431  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.074549  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.074817  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.074928  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.074934  107839 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0516 02:14:46.074977  107839 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0516 02:14:46.075270  107839 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.075416  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.075458  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.075556  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.075648  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.075987  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.076122  107839 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0516 02:14:46.076316  107839 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.076405  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.076428  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.076498  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.076579  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.076637  107839 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0516 02:14:46.076820  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.078869  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.078897  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.078923  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.078979  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.079074  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.079169  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.079211  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.078871  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.081192  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.081338  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.081512  107839 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0516 02:14:46.081582  107839 master.go:425] Enabling API group "extensions".
I0516 02:14:46.081591  107839 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0516 02:14:46.081798  107839 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.081920  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.081937  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.081998  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.082070  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.083389  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.083437  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.083534  107839 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0516 02:14:46.083656  107839 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0516 02:14:46.084336  107839 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.084433  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.084478  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.084518  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.084573  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.084694  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.085272  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.085326  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.085398  107839 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0516 02:14:46.085418  107839 master.go:425] Enabling API group "networking.k8s.io".
I0516 02:14:46.085457  107839 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.085518  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.085525  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.085552  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.085569  107839 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0516 02:14:46.085591  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.085691  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.086050  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.086179  107839 store.go:1320] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0516 02:14:46.086204  107839 master.go:425] Enabling API group "node.k8s.io".
I0516 02:14:46.086427  107839 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.086519  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.086544  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.086579  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.086633  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.086667  107839 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0516 02:14:46.086931  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.087271  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.087368  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.087422  107839 store.go:1320] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0516 02:14:46.087605  107839 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.087666  107839 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0516 02:14:46.087685  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.087699  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.087732  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.087977  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.088675  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.088826  107839 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0516 02:14:46.088843  107839 master.go:425] Enabling API group "policy".
I0516 02:14:46.088875  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.088880  107839 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.089056  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.089073  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.089108  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.089126  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.089166  107839 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0516 02:14:46.089233  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.089434  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.090135  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.090217  107839 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0516 02:14:46.090217  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.090420  107839 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.090521  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.090538  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.090572  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.090615  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.090663  107839 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0516 02:14:46.090839  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.091183  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.091262  107839 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0516 02:14:46.091291  107839 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.091379  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.091396  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.091439  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.091500  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.093053  107839 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0516 02:14:46.093406  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.093408  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.093940  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.094280  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.094606  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.094735  107839 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0516 02:14:46.094774  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.094833  107839 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0516 02:14:46.094925  107839 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.095024  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.095037  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.095103  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.095185  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.096370  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.104302  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.104460  107839 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0516 02:14:46.104530  107839 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.104570  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.104684  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.104697  107839 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0516 02:14:46.104701  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.104778  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.104921  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.105910  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.106015  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.106042  107839 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0516 02:14:46.106132  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.106200  107839 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0516 02:14:46.106319  107839 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.106447  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.106463  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.106497  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.106647  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.106973  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.107107  107839 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0516 02:14:46.107187  107839 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.107280  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.107303  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.107342  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.107418  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.107462  107839 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0516 02:14:46.107668  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.107811  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.107969  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.108035  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.108108  107839 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0516 02:14:46.108282  107839 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.108378  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.108393  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.108391  107839 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0516 02:14:46.108432  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.108832  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.110080  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.110127  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.110283  107839 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0516 02:14:46.110312  107839 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0516 02:14:46.110685  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.110725  107839 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0516 02:14:46.112386  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.113452  107839 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.113560  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.113576  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.113609  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.113737  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.113972  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.114267  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.114405  107839 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0516 02:14:46.114566  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.114608  107839 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.114668  107839 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0516 02:14:46.114698  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.115018  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.115072  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.115119  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.115423  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.115536  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.115573  107839 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0516 02:14:46.115598  107839 master.go:425] Enabling API group "scheduling.k8s.io".
I0516 02:14:46.115623  107839 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0516 02:14:46.115754  107839 master.go:417] Skipping disabled API group "settings.k8s.io".
I0516 02:14:46.115989  107839 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.116081  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.116106  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.116146  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.116207  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.116592  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.116646  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.116731  107839 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0516 02:14:46.116927  107839 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.117047  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.117074  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.117150  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.117052  107839 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0516 02:14:46.117592  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.118368  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.118456  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.118577  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.118700  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.118705  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.118726  107839 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0516 02:14:46.118796  107839 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.118839  107839 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0516 02:14:46.118987  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.119014  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.119724  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.119816  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.120271  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.120376  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.120392  107839 store.go:1320] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0516 02:14:46.120423  107839 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.120476  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.120488  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.120501  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.120581  107839 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0516 02:14:46.120600  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.120658  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.121480  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.121642  107839 store.go:1320] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0516 02:14:46.121824  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.121870  107839 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.121988  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.122009  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.122058  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.121885  107839 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0516 02:14:46.122564  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.122597  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.122884  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.122980  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.123024  107839 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0516 02:14:46.123109  107839 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0516 02:14:46.123196  107839 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.123322  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.123339  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.123374  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.123427  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.123704  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.123862  107839 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0516 02:14:46.123879  107839 master.go:425] Enabling API group "storage.k8s.io".
I0516 02:14:46.124445  107839 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.124549  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.124561  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.124592  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.124651  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.124681  107839 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0516 02:14:46.124899  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.125303  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.125360  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.125501  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.125930  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.126113  107839 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 02:14:46.126300  107839 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.126407  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.126431  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.126473  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.126563  107839 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 02:14:46.126572  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.126815  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.127174  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.127650  107839 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0516 02:14:46.127838  107839 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.127923  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.127938  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.128015  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.128056  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.128080  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.128113  107839 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0516 02:14:46.128411  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.128777  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.128896  107839 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0516 02:14:46.129112  107839 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.129212  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.129230  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.129264  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.129347  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.129389  107839 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0516 02:14:46.129804  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.130145  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.130317  107839 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 02:14:46.130402  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.130427  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.130518  107839 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.130572  107839 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 02:14:46.130611  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.130627  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.130667  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.130859  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.131131  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.131259  107839 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0516 02:14:46.131434  107839 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.131500  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.131512  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.131544  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.131595  107839 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0516 02:14:46.131709  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.131451  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.131856  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.132226  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.132735  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.132785  107839 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0516 02:14:46.133021  107839 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.133095  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.133105  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.133135  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.133150  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.133221  107839 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0516 02:14:46.133935  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.134467  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.134495  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.134526  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.135034  107839 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0516 02:14:46.135184  107839 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0516 02:14:46.135341  107839 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.135419  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.135450  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.135497  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.135605  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.135773  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.136176  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.136191  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.136275  107839 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0516 02:14:46.136815  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.136850  107839 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.136890  107839 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0516 02:14:46.136970  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.136991  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.137025  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.137092  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.137422  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.137505  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.137576  107839 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 02:14:46.137667  107839 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 02:14:46.137763  107839 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.137889  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.137905  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.137941  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.138049  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.139127  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.139437  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.139590  107839 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0516 02:14:46.139784  107839 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.139886  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.139910  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.140042  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.140248  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.140320  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.140366  107839 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0516 02:14:46.140636  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.141222  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.141365  107839 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0516 02:14:46.141552  107839 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.141645  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.141670  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.141755  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.141843  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.141886  107839 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0516 02:14:46.143926  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.144307  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.144434  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.144854  107839 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0516 02:14:46.144981  107839 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0516 02:14:46.145061  107839 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.145154  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.145170  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.145233  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.145284  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.145621  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.145721  107839 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0516 02:14:46.145737  107839 master.go:425] Enabling API group "apps".
I0516 02:14:46.145779  107839 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.145836  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.145846  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.145901  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.145969  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.146035  107839 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0516 02:14:46.146269  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.146892  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.147023  107839 store.go:1320] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0516 02:14:46.147062  107839 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.147121  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.147139  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.147167  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.147235  107839 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0516 02:14:46.147477  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.147624  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.148838  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.148928  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.148991  107839 store.go:1320] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0516 02:14:46.149022  107839 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0516 02:14:46.149083  107839 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d5aa9130-4d17-4785-b3db-8a3ab351e7ea", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 02:14:46.149348  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.149379  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.149418  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.149493  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.149527  107839 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0516 02:14:46.149495  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.149801  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.150089  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.150343  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.150459  107839 store.go:1320] Monitoring events count at <storage-prefix>//events
I0516 02:14:46.150474  107839 master.go:425] Enabling API group "events.k8s.io".
I0516 02:14:46.151399  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.152160  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.152916  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.152980  107839 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0516 02:14:46.153160  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
I0516 02:14:46.154439  107839 watch_cache.go:405] Replace watchCache (rev: 22109) 
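Note on the preceding lines: the repeated "Listing and watching *<Type> from storage/cacher.go:..." and "Replace watchCache (rev: 22109)" entries record the apiserver's per-resource cacher doing an initial LIST and then replacing its watch cache at that list's resourceVersion before consuming WATCH events, which is why the same revision (22109) stamps many resources in a row. A minimal sketch of that list-then-watch idea, using toy types rather than the real storage/cacher code, might look like this:

package main

import "fmt"

type event struct {
	Type string // "ADDED", "MODIFIED", "DELETED"
	Key  string
}

type watchCache struct {
	rev   int
	items map[string]struct{}
}

// replace mirrors "Replace watchCache (rev: N)": drop the old contents and
// install the freshly listed items at the list's resourceVersion.
func (c *watchCache) replace(items []string, rev int) {
	c.items = map[string]struct{}{}
	for _, it := range items {
		c.items[it] = struct{}{}
	}
	c.rev = rev
	fmt.Printf("Replace watchCache (rev: %d)\n", rev)
}

// apply keeps the cache current from subsequent watch events.
func (c *watchCache) apply(e event) {
	switch e.Type {
	case "ADDED", "MODIFIED":
		c.items[e.Key] = struct{}{}
	case "DELETED":
		delete(c.items, e.Key)
	}
	c.rev++
}

func main() {
	c := &watchCache{}
	c.replace([]string{"roles/admin", "roles/view"}, 22109) // initial LIST
	c.apply(event{Type: "ADDED", Key: "roles/edit"})        // later WATCH event
	fmt.Println(len(c.items), "items at rev", c.rev)
}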
W0516 02:14:46.159764  107839 genericapiserver.go:347] Skipping API batch/v2alpha1 because it has no resources.
W0516 02:14:46.174535  107839 genericapiserver.go:347] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0516 02:14:46.238441  107839 genericapiserver.go:347] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0516 02:14:46.239912  107839 genericapiserver.go:347] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0516 02:14:46.243472  107839 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
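Note on the warnings above: "Skipping API <group>/<version> because it has no resources." is the generic apiserver declining to install a group version for which no storage-backed resources are enabled. A minimal sketch of that skip decision, with hypothetical types and names (not the real genericapiserver code):

package main

import "log"

// apiGroupInfo is a stand-in for the structure the server walks when
// installing API groups: version -> resource name -> storage.
type apiGroupInfo struct {
	Group                        string
	VersionedResourcesStorageMap map[string]map[string]interface{}
}

func installAPIGroup(g apiGroupInfo) {
	for version, resources := range g.VersionedResourcesStorageMap {
		if len(resources) == 0 {
			// Matches the W-level lines in the log: the version is skipped,
			// not treated as an error.
			log.Printf("Skipping API %s/%s because it has no resources.", g.Group, version)
			continue
		}
		// ... register HTTP handlers for each resource in this version ...
	}
}

func main() {
	installAPIGroup(apiGroupInfo{
		Group: "batch",
		VersionedResourcesStorageMap: map[string]map[string]interface{}{
			"v2alpha1": {}, // no enabled resources -> skipped, as in the log
		},
	})
}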
I0516 02:14:46.260903  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.260968  107839 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0516 02:14:46.260983  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.260994  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.261013  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.261023  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.261179  107839 wrap.go:47] GET /healthz: (381.582µs) 500
goroutine 10896 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00839a000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00839a000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00481a420, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425a008, 0xc00005e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425a008, 0xc006278300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425a008, 0xc006278200)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425a008, 0xc006278200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007832780, 0xc00448e3e0, 0x73aefc0, 0xc00425a008, 0xc006278200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45642]
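Note on the 500 responses above: GET /healthz returns the aggregated per-check status ([+] passing, [-] failing with "reason withheld") and stays at 500 while etcd is not yet connected and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) have not finished; the test harness keeps polling until everything reports [+]. A minimal, self-contained sketch of such a poller follows; the base URL and timeout are assumptions for illustration, not values taken from this test:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the timeout expires.
func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks report [+]; the server is ready
			}
			// Non-200 bodies look like the "[+]ping ok / [-]etcd failed" text above.
			fmt.Printf("healthz not ready (%d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}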
I0516 02:14:46.263793  107839 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.909992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45644]
I0516 02:14:46.266885  107839 wrap.go:47] GET /api/v1/services: (1.569651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45644]
I0516 02:14:46.271542  107839 wrap.go:47] GET /api/v1/services: (1.078278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45644]
I0516 02:14:46.274473  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.274524  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.274548  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.274572  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.274583  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.274738  107839 wrap.go:47] GET /healthz: (340.178µs) 500
goroutine 10785 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083ba310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083ba310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0048c4540, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc002e6e028, 0xc0049fe780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc002e6e028, 0xc00620c500)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc002e6e028, 0xc00620c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007952f60, 0xc00448e3e0, 0x73aefc0, 0xc002e6e028, 0xc00620c500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:46.275689  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.702729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45644]
I0516 02:14:46.276476  107839 wrap.go:47] GET /api/v1/services: (1.250665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:46.277860  107839 wrap.go:47] GET /api/v1/services: (1.17638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.277924  107839 wrap.go:47] POST /api/v1/namespaces: (1.691489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45644]
I0516 02:14:46.279277  107839 wrap.go:47] GET /api/v1/namespaces/kube-public: (931.772µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.281137  107839 wrap.go:47] POST /api/v1/namespaces: (1.445755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.282400  107839 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (910.447µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.284372  107839 wrap.go:47] POST /api/v1/namespaces: (1.594571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.362636  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.362673  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.362686  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.362696  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.362714  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.362912  107839 wrap.go:47] GET /healthz: (422.236µs) 500
goroutine 10994 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008377030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008377030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00491bd40, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00253bcb8, 0xc002f1c480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e0d00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00253bcb8, 0xc0062e0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007581d40, 0xc00448e3e0, 0x73aefc0, 0xc00253bcb8, 0xc0062e0d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
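Each of these goroutine dumps is printed by httplog when a handler writes a 500. Read bottom-up, the frames trace the apiserver filter chain: timeout handler, WithAuthentication, WithImpersonation, WithMaxInFlightLimit, WithAuthorization, then the mux and finally the healthz handler. That ordering is just ordinary net/http handler wrapping; a rough standard-library sketch of the composition pattern, with hypothetical filter names rather than the real apiserver filters, looks like this:

package filtersketch

import "net/http"

// with wraps next so that the outermost wrapper runs first; when the
// innermost handler writes a response, the call stack lists the wrappers
// in reverse order, which is exactly the shape of the traces above.
func with(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ... per-filter work (authn, impersonation, limits, authz) here ...
		next.ServeHTTP(w, r)
	})
}

func buildChain(healthz http.Handler) http.Handler {
	// Wrap innermost first, outermost last, mirroring the order in the trace:
	// authentication runs first on a request, authorization runs closest to
	// the healthz handler.
	h := with("authorization", healthz)
	h = with("max-in-flight", h)
	h = with("impersonation", h)
	h = with("authentication", h)
	return h
}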
I0516 02:14:46.375616  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.375668  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.375683  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.375693  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.375701  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.375864  107839 wrap.go:47] GET /healthz: (381.58µs) 500
goroutine 10877 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083cc770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083cc770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046d6140, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304a258, 0xc003070480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304a258, 0xc004287000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304a258, 0xc004286e00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304a258, 0xc004286e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007a88960, 0xc00448e3e0, 0x73aefc0, 0xc00304a258, 0xc004286e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.462320  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.462373  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.462388  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.462397  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.462404  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.462548  107839 wrap.go:47] GET /healthz: (374.58µs) 500
goroutine 10980 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083a37a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083a37a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046e3d40, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b422f0, 0xc00349ca80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3800)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b422f0, 0xc0061a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0071f7680, 0xc00448e3e0, 0x73aefc0, 0xc007b422f0, 0xc0061a3800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:46.478362  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.478407  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.478420  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.478430  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.478439  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.478643  107839 wrap.go:47] GET /healthz: (411.358µs) 500
goroutine 10879 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083cc8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083cc8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046d6940, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304a260, 0xc003070c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304a260, 0xc004287600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304a260, 0xc004287400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304a260, 0xc004287400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007a88ba0, 0xc00448e3e0, 0x73aefc0, 0xc00304a260, 0xc004287400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.569416  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.569480  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.569506  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.569519  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.569527  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.569675  107839 wrap.go:47] GET /healthz: (397.081µs) 500
goroutine 10881 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083cca80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083cca80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046d6b60, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304a268, 0xc003071200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304a268, 0xc004287a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304a268, 0xc004287900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304a268, 0xc004287900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007a890e0, 0xc00448e3e0, 0x73aefc0, 0xc00304a268, 0xc004287900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:46.575498  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.575535  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.575549  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.575559  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.575568  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.575721  107839 wrap.go:47] GET /healthz: (324.6µs) 500
goroutine 11011 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083ccbd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083ccbd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046d6de0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304a2c8, 0xc003071800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c100)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304a2c8, 0xc003d5c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007a89260, 0xc00448e3e0, 0x73aefc0, 0xc00304a2c8, 0xc003d5c100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:46.662794  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.662846  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.662863  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.662874  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.662883  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.663082  107839 wrap.go:47] GET /healthz: (434.688µs) 500
goroutine 10982 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083a38f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083a38f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046c6780, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b42338, 0xc00349d380, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b42338, 0xc0049d4400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b42338, 0xc0049d4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0071f7980, 0xc00448e3e0, 0x73aefc0, 0xc007b42338, 0xc0049d4400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:46.675643  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.675695  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.675710  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.675721  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.675730  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.675944  107839 wrap.go:47] GET /healthz: (444.485µs) 500
goroutine 10996 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083771f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083771f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046b6560, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00253bce0, 0xc002f1cd80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00253bce0, 0xc004628000)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00253bce0, 0xc004628000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007581f80, 0xc00448e3e0, 0x73aefc0, 0xc00253bce0, 0xc004628000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.762355  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.762396  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.762410  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.762421  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.762445  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.762624  107839 wrap.go:47] GET /healthz: (433.455µs) 500
goroutine 10998 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083777a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083777a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046b6e80, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00253bd08, 0xc002f1d500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00253bd08, 0xc004628600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00253bd08, 0xc004628600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0079ca9c0, 0xc00448e3e0, 0x73aefc0, 0xc00253bd08, 0xc004628600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:46.775590  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.775630  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.775643  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.775652  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.775661  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.775827  107839 wrap.go:47] GET /healthz: (368.104µs) 500
goroutine 11000 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083778f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083778f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046b70c0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00253bd10, 0xc002f1dc80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00253bd10, 0xc004628a00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00253bd10, 0xc004628a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0079caae0, 0xc00448e3e0, 0x73aefc0, 0xc00253bd10, 0xc004628a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.862351  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.862397  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.862411  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.862421  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.862430  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.862591  107839 wrap.go:47] GET /healthz: (394.221µs) 500
goroutine 11002 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008377ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008377ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046b72e0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00253bd18, 0xc00316a480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00253bd18, 0xc004628e00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00253bd18, 0xc004628e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0079cac00, 0xc00448e3e0, 0x73aefc0, 0xc00253bd18, 0xc004628e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:46.875661  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.875702  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.875717  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.875727  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.875740  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.875968  107839 wrap.go:47] GET /healthz: (444.598µs) 500
goroutine 10984 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083a3a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083a3a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046c6a00, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b423a0, 0xc00349d980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b423a0, 0xc0049d5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0071f7c80, 0xc00448e3e0, 0x73aefc0, 0xc007b423a0, 0xc0049d5400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.962300  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.962336  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.962350  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.962361  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.962371  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.962550  107839 wrap.go:47] GET /healthz: (412.336µs) 500
goroutine 11004 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008377c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008377c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046b7700, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00253bd20, 0xc00316ad80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00253bd20, 0xc004629800)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00253bd20, 0xc004629800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0079caea0, 0xc00448e3e0, 0x73aefc0, 0xc00253bd20, 0xc004629800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:46.975566  107839 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 02:14:46.975600  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:46.975615  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:46.975626  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:46.975636  107839 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:46.975848  107839 wrap.go:47] GET /healthz: (428.306µs) 500
goroutine 10986 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083a3c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083a3c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046c6d00, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b423e8, 0xc0031cc000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5d00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b423e8, 0xc0049d5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0071f7ec0, 0xc00448e3e0, 0x73aefc0, 0xc007b423e8, 0xc0049d5d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:46.986620  107839 client.go:354] parsed scheme: ""
I0516 02:14:46.986655  107839 client.go:354] scheme "" not registered, fallback to default scheme
I0516 02:14:46.986737  107839 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 02:14:46.986822  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 02:14:46.987420  107839 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 02:14:46.987498  107839 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
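Here the embedded etcd client finally pins 127.0.0.1:2379, and from the next probe onward the healthz output flips from "[-]etcd failed" to "[+]etcd ok", leaving only the post-start hooks outstanding. For reference, a standalone connection check against the same endpoint could be sketched with the etcd clientv3 package; the import path, options, and key are assumptions for the client version vendored at the time, and this is not the apiserver's own etcd health check:

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Dial the same endpoint the test apiserver uses; values are illustrative.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("etcd client connection not yet established: %v", err)
	}
	defer cli.Close()

	// A cheap read confirms the connection is usable, mirroring the healthz
	// etcd check going from failed to ok once the client is pinned.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "health"); err != nil {
		log.Fatalf("etcd not ready: %v", err)
	}
	log.Println("[+]etcd ok")
}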
I0516 02:14:47.063301  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.063334  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:47.063345  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:47.063354  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:47.063519  107839 wrap.go:47] GET /healthz: (1.330364ms) 500
goroutine 11020 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083ccfc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083ccfc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046d77e0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304a368, 0xc00311e9a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304a368, 0xc003d5c800)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304a368, 0xc003d5c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007a89c80, 0xc00448e3e0, 0x73aefc0, 0xc00304a368, 0xc003d5c800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:47.076566  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.076602  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:47.076613  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:47.076622  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:47.076799  107839 wrap.go:47] GET /healthz: (1.39924ms) 500
goroutine 10988 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083a3dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083a3dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046c7cc0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b42400, 0xc004662580, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b42400, 0xc003f9c900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b42400, 0xc003f9c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007892060, 0xc00448e3e0, 0x73aefc0, 0xc007b42400, 0xc003f9c900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.171197  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.171240  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:47.171252  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:47.171261  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:47.171416  107839 wrap.go:47] GET /healthz: (1.206028ms) 500
goroutine 10990 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083d8230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083d8230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046aa000, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b42410, 0xc0004e29a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b42410, 0xc003f9dc00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b42410, 0xc003f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0078928a0, 0xc00448e3e0, 0x73aefc0, 0xc007b42410, 0xc003f9dc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:47.176150  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.176180  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:47.176191  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:47.176199  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:47.176332  107839 wrap.go:47] GET /healthz: (959.079µs) 500
goroutine 11022 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083cd260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083cd260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046d7ca0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304a440, 0xc0046629a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304a440, 0xc003d5d200)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304a440, 0xc003d5d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0077e2240, 0xc00448e3e0, 0x73aefc0, 0xc00304a440, 0xc003d5d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
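Note: the repeated GET /healthz 500 responses above are the test client polling the embedded apiserver while its poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) are still running; once each hook finishes, its line flips from [-] to [+] in later stanzas. A minimal sketch of such a readiness poll, assuming only a plain HTTP /healthz endpoint and an illustrative loopback address (this is not the actual test harness code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls GET <base>/healthz until it returns 200 OK or the
// deadline expires. The apiserver answers 500 while poststarthooks are
// still in flight, exactly as in the log above.
func waitHealthy(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	// The address is illustrative; the integration test wires up its own
	// loopback listener.
	if err := waitHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}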
I0516 02:14:47.263696  107839 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.313488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.264084  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.785569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:47.266942  107839 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.030304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.267221  107839 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.136683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:47.267464  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.267481  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:47.267491  107839 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 02:14:47.267499  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 02:14:47.267654  107839 wrap.go:47] GET /healthz: (3.268344ms) 500
goroutine 11042 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083ea690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083ea690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046efe20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc0030042e8, 0xc00396c2c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc0030042e8, 0xc004c66c00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc0030042e8, 0xc004c66c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007915c20, 0xc00448e3e0, 0x73aefc0, 0xc0030042e8, 0xc004c66c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:47.267913  107839 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0516 02:14:47.271408  107839 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.115268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.271930  107839 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (3.59571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:47.274106  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.216286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.276705  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.275741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.278471  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.278497  107839 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 02:14:47.278508  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.278661  107839 wrap.go:47] GET /healthz: (2.310833ms) 500
goroutine 11046 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083ea9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083ea9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0046908c0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc003004368, 0xc0039fc000, 0x14b, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc003004368, 0xc004c67c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc003004368, 0xc004c67b00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc003004368, 0xc004c67b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007660f00, 0xc00448e3e0, 0x73aefc0, 0xc003004368, 0xc004c67b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.278891  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.814793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.282345  107839 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (9.925185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45642]
I0516 02:14:47.282361  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (3.027816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.282828  107839 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0516 02:14:47.282865  107839 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
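For reference, the two bootstrap PriorityClass objects recorded above (system-node-critical at 2000001000 and system-cluster-critical at 2000000000) can be expressed with the scheduling/v1beta1 API types. A hedged sketch, with names and values taken from the log and module wiring assumed:

package main

import (
	"fmt"

	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Values match the bootstrap log above: node-critical outranks
	// cluster-critical by 1000. GlobalDefault is left false.
	classes := []schedulingv1beta1.PriorityClass{
		{
			ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
			Value:      2000001000,
		},
		{
			ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"},
			Value:      2000000000,
		},
	}
	for _, pc := range classes {
		fmt.Printf("%s -> %d\n", pc.Name, pc.Value)
	}
}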
I0516 02:14:47.283755  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.010208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.284899  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (781.203µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.286446  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.065022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.292454  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (5.653256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.294465  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (985.427µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.296167  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.137115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.298784  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.188372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.299915  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
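The pattern in these wrap.go lines, a GET returning 404 immediately followed by a POST returning 201, is the RBAC bootstrap ensuring each default clusterrole exists before marking its poststarthook done. A generic sketch of that ensure-exists idiom over plain HTTP with an illustrative URL and JSON body (not the actual storage_rbac.go code):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureExists mirrors the GET-404 / POST-201 pairs in the log: probe the
// resource URL first and create it only when the apiserver reports 404.
// Error statuses other than 404 are deliberately ignored in this sketch.
func ensureExists(client *http.Client, getURL, postURL string, body []byte) error {
	resp, err := client.Get(getURL)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already present
	}
	resp, err = client.Post(postURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// URLs are illustrative; the integration test talks to its own
	// loopback apiserver.
	_ = ensureExists(http.DefaultClient,
		"http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin",
		"http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1/clusterroles",
		[]byte(`{"kind":"ClusterRole","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"cluster-admin"}}`))
}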
I0516 02:14:47.301241  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.025185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.303720  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.702861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.303940  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0516 02:14:47.305329  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.200771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.310325  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.399716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.310526  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0516 02:14:47.311731  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.050983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.313807  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.674428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.314039  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0516 02:14:47.315229  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.043155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.317286  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.696499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.317455  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0516 02:14:47.318287  107839 cacher.go:739] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0516 02:14:47.319108  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.499829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.321321  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.865234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.321617  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0516 02:14:47.322992  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.09794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.326600  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.118732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.327608  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0516 02:14:47.329139  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.348554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.331195  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.630622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.331455  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0516 02:14:47.332657  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.002088ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.334856  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.610164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.335215  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0516 02:14:47.336259  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (833.325µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.338414  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.726387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.339134  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0516 02:14:47.340200  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (835.322µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.342211  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.342646  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0516 02:14:47.343905  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (935.099µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.347765  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.671857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.348340  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0516 02:14:47.349397  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (861.231µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.351376  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.351799  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0516 02:14:47.353386  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.351211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.356175  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.94272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.356439  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0516 02:14:47.357632  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (972.933µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.359501  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.411868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.359695  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0516 02:14:47.361372  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.504411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.363067  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.363094  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.363251  107839 wrap.go:47] GET /healthz: (1.134367ms) 500
goroutine 11114 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008446380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008446380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00447fd60, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b42b30, 0xc003c35540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9b00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b42b30, 0xc0030d9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006f45740, 0xc00448e3e0, 0x73aefc0, 0xc007b42b30, 0xc0030d9b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:47.364587  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.860836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.364836  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0516 02:14:47.366023  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (882.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.368000  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.454266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.368224  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0516 02:14:47.369292  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (849.653µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.371245  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.508199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.371552  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0516 02:14:47.372638  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (802.354µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.374965  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.868564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.375191  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0516 02:14:47.376591  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.376623  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.376627  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.241598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.376824  107839 wrap.go:47] GET /healthz: (1.357585ms) 500
goroutine 10975 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0083bbdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0083bbdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0043de240, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc002e6e3e0, 0xc004a6ea00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fd00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc002e6e3e0, 0xc003c3fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005d58240, 0xc00448e3e0, 0x73aefc0, 0xc002e6e3e0, 0xc003c3fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.378498  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.43815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.378812  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0516 02:14:47.379904  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (791.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.381898  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.564208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.382112  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0516 02:14:47.383153  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (806.533µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.385447  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.839427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.385699  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0516 02:14:47.386841  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (897.728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.391602  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.657883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.408905  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0516 02:14:47.411543  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.252035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.413829  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.744666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.414069  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0516 02:14:47.415217  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (985.63µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.417237  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.591625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.417448  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0516 02:14:47.418461  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (867.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.420491  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.662745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.420728  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0516 02:14:47.422081  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.173313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.423890  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.424182  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0516 02:14:47.425313  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (941.873µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.429497  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.83696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.429724  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0516 02:14:47.431023  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (933.219µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.433020  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.57196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.433323  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0516 02:14:47.434497  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (952.101µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.436908  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.708923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.437434  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0516 02:14:47.438465  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (794.554µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.440850  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.687851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.450768  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0516 02:14:47.452033  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (950.499µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.454537  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.881386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.455122  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0516 02:14:47.456167  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (816.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.458305  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.458542  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0516 02:14:47.459581  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (833.867µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.461598  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.444876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.462415  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0516 02:14:47.466146  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.466190  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.466457  107839 wrap.go:47] GET /healthz: (4.406019ms) 500
goroutine 11227 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00848b340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00848b340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00429eb00, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4cec8, 0xc003102280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4cec8, 0xc003337600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4cec8, 0xc003337600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0063c4480, 0xc00448e3e0, 0x73aefc0, 0xc000b4cec8, 0xc003337600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:47.467282  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (919.907µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.469517  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.796684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.470020  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0516 02:14:47.471125  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (889.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.472985  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.473396  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0516 02:14:47.474410  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (794.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.490347  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.490379  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.490549  107839 wrap.go:47] GET /healthz: (10.155274ms) 500
goroutine 11256 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084d4380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084d4380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0042620c0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000c64f50, 0xc0036a08c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9200)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000c64f50, 0xc0035b9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006312ea0, 0xc00448e3e0, 0x73aefc0, 0xc000c64f50, 0xc0035b9200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.491359  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.529723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.491662  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0516 02:14:47.492935  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.021327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.497208  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.772311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.497517  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0516 02:14:47.498588  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (816.362µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.500601  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.643199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.500873  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0516 02:14:47.502081  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (981.861µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.504079  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.594535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.504322  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0516 02:14:47.505310  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (815.856µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.507161  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.507388  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0516 02:14:47.509043  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.377707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.511042  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.544113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.511251  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0516 02:14:47.512170  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (768.127µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.513924  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.380003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.514230  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0516 02:14:47.515284  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (834.827µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.517152  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.517397  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0516 02:14:47.518454  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (891.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.520302  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.453439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.520485  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0516 02:14:47.521486  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (790.982µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.523306  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.426563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.523728  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0516 02:14:47.524782  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (771.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.528190  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.018133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.528500  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0516 02:14:47.530387  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.736487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.532274  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.489178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.532648  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0516 02:14:47.533858  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (847.972µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.536354  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.086211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.536594  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0516 02:14:47.538789  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (2.038877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.540855  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.56645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.541263  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0516 02:14:47.542832  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.356236ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.549385  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.124437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.549916  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0516 02:14:47.551129  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (885.266µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.558001  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.338902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.558319  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0516 02:14:47.560029  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.38433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.564865  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.564897  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.565088  107839 wrap.go:47] GET /healthz: (2.990696ms) 500
goroutine 11334 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0094d8bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0094d8bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00417ad40, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4d518, 0xc003c35a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4d518, 0xc005f12000)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4d518, 0xc005f12000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004dc50e0, 0xc00448e3e0, 0x73aefc0, 0xc000b4d518, 0xc005f12000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:47.572559  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.572804  107839 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0516 02:14:47.592388  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.342591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.613358  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.354154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.613763  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0516 02:14:47.619232  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.619264  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.619422  107839 wrap.go:47] GET /healthz: (1.065899ms) 500
goroutine 11338 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0094d9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0094d9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00417bb40, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4d608, 0xc002faa3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4d608, 0xc005f12f00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4d608, 0xc005f12f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003f92600, 0xc00448e3e0, 0x73aefc0, 0xc000b4d608, 0xc005f12f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.632196  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.204605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.653320  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.225639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.653568  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0516 02:14:47.664413  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.664450  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.664616  107839 wrap.go:47] GET /healthz: (1.211552ms) 500
goroutine 11347 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009509960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009509960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004126020, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000c65638, 0xc003102a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000c65638, 0xc0059b4b00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000c65638, 0xc0059b4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0059ffc20, 0xc00448e3e0, 0x73aefc0, 0xc000c65638, 0xc0059b4b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:47.672117  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.126876ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.676227  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.676255  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.676429  107839 wrap.go:47] GET /healthz: (1.055924ms) 500
goroutine 11363 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084b99d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084b99d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004199c20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc003004f30, 0xc004a6f040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc003004f30, 0xc006418400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc003004f30, 0xc006418300)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc003004f30, 0xc006418300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004cd6ae0, 0xc00448e3e0, 0x73aefc0, 0xc003004f30, 0xc006418300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.692992  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.034916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.693444  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0516 02:14:47.712278  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.269269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.733212  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.733429  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0516 02:14:47.752801  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.771072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.763484  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.763528  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.763684  107839 wrap.go:47] GET /healthz: (1.17378ms) 500
goroutine 11365 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084b9b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084b9b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00411e120, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc003004f78, 0xc0036a12c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc003004f78, 0xc006418a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc003004f78, 0xc006418900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc003004f78, 0xc006418900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004cd77a0, 0xc00448e3e0, 0x73aefc0, 0xc003004f78, 0xc006418900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:47.776939  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.72977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.777222  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0516 02:14:47.778996  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.779024  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.779178  107839 wrap.go:47] GET /healthz: (1.223952ms) 500
goroutine 11351 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009509f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009509f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004126a80, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000c656d0, 0xc003103180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000c656d0, 0xc005776600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000c656d0, 0xc005776600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004a1e780, 0xc00448e3e0, 0x73aefc0, 0xc000c656d0, 0xc005776600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.792204  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.246844ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.816457  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.003069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.816789  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0516 02:14:47.833009  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.029392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.852910  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.950619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.853213  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0516 02:14:47.863398  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.863430  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.863617  107839 wrap.go:47] GET /healthz: (1.153587ms) 500
goroutine 11385 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00952d810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00952d810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040c0520, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4d920, 0xc002faaf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4d920, 0xc0084ef600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0043fed80, 0xc00448e3e0, 0x73aefc0, 0xc000b4d920, 0xc0084ef600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:47.872021  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.105908ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.876179  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.876208  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.876351  107839 wrap.go:47] GET /healthz: (1.02301ms) 500
goroutine 11356 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00954c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00954c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004127dc0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000c657f0, 0xc002fab680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000c657f0, 0xc009008b00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000c657f0, 0xc009008b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003d953e0, 0xc00448e3e0, 0x73aefc0, 0xc000c657f0, 0xc009008b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.893175  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.893441  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0516 02:14:47.912343  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.329258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.933362  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.336224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.933782  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0516 02:14:47.952044  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.120858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.963546  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.963580  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.963770  107839 wrap.go:47] GET /healthz: (1.353802ms) 500
goroutine 11377 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009562b60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009562b60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004096a60, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc003005380, 0xc00307a500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc003005380, 0xc00995c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc003005380, 0xc00995c600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc003005380, 0xc00995c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003c45d40, 0xc00448e3e0, 0x73aefc0, 0xc003005380, 0xc00995c600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:47.981807  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:47.981844  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:47.982056  107839 wrap.go:47] GET /healthz: (6.715787ms) 500
goroutine 11392 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00958a700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00958a700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004094be0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4db08, 0xc004a6f680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4db08, 0xc009645600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4db08, 0xc009645600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0037deba0, 0xc00448e3e0, 0x73aefc0, 0xc000b4db08, 0xc009645600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:47.982666  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.431279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:47.983029  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0516 02:14:47.991887  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.003386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.013557  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.496768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.013797  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0516 02:14:48.032274  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.257683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.053479  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.209469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.053794  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0516 02:14:48.063201  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.063235  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.063408  107839 wrap.go:47] GET /healthz: (1.316666ms) 500
goroutine 10937 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00839bb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00839bb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00466b4e0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425a258, 0xc003418dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425a258, 0xc009088a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425a258, 0xc009088900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425a258, 0xc009088900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007833f20, 0xc00448e3e0, 0x73aefc0, 0xc00425a258, 0xc009088900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:48.072084  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.140262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.076342  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.076370  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.076538  107839 wrap.go:47] GET /healthz: (1.243427ms) 500
goroutine 10939 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00839bce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00839bce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00466ba80, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425a298, 0xc003419400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425a298, 0xc009089200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425a298, 0xc009089100)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425a298, 0xc009089100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00362ea20, 0xc00448e3e0, 0x73aefc0, 0xc00425a298, 0xc009089100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.092688  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.743236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.092976  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0516 02:14:48.112149  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.173872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.133084  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.128471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.133360  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0516 02:14:48.152173  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.169353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.166482  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.166512  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.166673  107839 wrap.go:47] GET /healthz: (2.80455ms) 500
goroutine 10943 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095a6380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095a6380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00400ab20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425a3b0, 0xc003419900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8a00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425a3b0, 0xc008ee8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0022f0cc0, 0xc00448e3e0, 0x73aefc0, 0xc00425a3b0, 0xc008ee8a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:48.176680  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.40032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:48.177715  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.177740  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.177908  107839 wrap.go:47] GET /healthz: (1.892246ms) 500
goroutine 11399 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00958aee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00958aee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fd0480, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4dc40, 0xc0036ff900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b600)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4dc40, 0xc00923b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0029c5740, 0xc00448e3e0, 0x73aefc0, 0xc000b4dc40, 0xc00923b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.178463  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0516 02:14:48.192115  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.177073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.218430  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.390733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.218711  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0516 02:14:48.232178  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.229136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.253153  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.143569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.253376  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0516 02:14:48.263204  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.263238  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.263404  107839 wrap.go:47] GET /healthz: (1.244936ms) 500
goroutine 11401 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00958b8f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00958b8f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fd1a20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4dd98, 0xc000078b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4dd98, 0xc006b74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4dd98, 0xc009ba5f00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4dd98, 0xc009ba5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0015ab860, 0xc00448e3e0, 0x73aefc0, 0xc000b4dd98, 0xc009ba5f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.272075  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.169611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.276387  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.276413  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.276573  107839 wrap.go:47] GET /healthz: (1.108303ms) 500
goroutine 11423 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0084c7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0084c7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fc7a60, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b43548, 0xc002fabe00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b43548, 0xc008cd9500)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b43548, 0xc008cd9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002424a80, 0xc00448e3e0, 0x73aefc0, 0xc007b43548, 0xc008cd9500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
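The repeated GET /healthz 500 responses in this log are the apiserver reporting that the rbac/bootstrap-roles post-start hook has not yet finished; the check reports ok only once that hook completes. As a minimal sketch (hypothetical helper, standard library only, not code from this test or repository), a readiness wait over /healthz could look like:

// Minimal sketch: poll an apiserver /healthz URL until it returns 200 OK,
// mirroring what the bootstrap sequence in this log is waiting for.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz is a hypothetical helper that polls url until it returns
// 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks, including post-start hooks, passed
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s did not become ready within %s", url, timeout)
}

func main() {
	// The address is illustrative; the integration test runs an in-process apiserver.
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}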
I0516 02:14:48.294067  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.926486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.294382  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0516 02:14:48.312683  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.285511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.333152  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.15843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.333471  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0516 02:14:48.352359  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.275874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.366034  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.366066  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.366239  107839 wrap.go:47] GET /healthz: (4.054155ms) 500
goroutine 11431 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095a7a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095a7a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fb16e0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425a8b0, 0xc003988640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa700)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425a8b0, 0xc0073aa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001fedf80, 0xc00448e3e0, 0x73aefc0, 0xc00425a8b0, 0xc0073aa700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.373139  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.261096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.373400  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0516 02:14:48.376157  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.376183  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.376327  107839 wrap.go:47] GET /healthz: (979.261µs) 500
goroutine 11443 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095c3180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095c3180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fdf800, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00304be50, 0xc000079a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00304be50, 0xc009a0db00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00304be50, 0xc009a0db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001e863c0, 0xc00448e3e0, 0x73aefc0, 0xc00304be50, 0xc009a0db00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.392203  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.254512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.413188  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.18497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.413456  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0516 02:14:48.432275  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.260811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.453085  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.060596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.453366  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0516 02:14:48.463620  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.463654  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.463826  107839 wrap.go:47] GET /healthz: (1.637011ms) 500
goroutine 11453 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095f8380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095f8380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f299c0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000155060, 0xc003988b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000155060, 0xc007af7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000155060, 0xc007af7400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000155060, 0xc007af7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001333b60, 0xc00448e3e0, 0x73aefc0, 0xc000155060, 0xc007af7400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.472157  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.196654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.476186  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.476219  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.476371  107839 wrap.go:47] GET /healthz: (1.006276ms) 500
goroutine 11461 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095d7420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095d7420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f977e0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b438c0, 0xc002568b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b438c0, 0xc00786f000)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b438c0, 0xc00786f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001c83080, 0xc00448e3e0, 0x73aefc0, 0xc007b438c0, 0xc00786f000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.493134  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.112289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.493396  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0516 02:14:48.512170  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.126362ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.532895  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.871313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.533209  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0516 02:14:48.552867  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.608047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.563032  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.563067  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.563217  107839 wrap.go:47] GET /healthz: (1.111515ms) 500
goroutine 11479 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095e49a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095e49a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026c8ba0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000b4dfd8, 0xc002569180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000b4dfd8, 0xc007ce2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003934420, 0xc00448e3e0, 0x73aefc0, 0xc000b4dfd8, 0xc007ce2400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.574545  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.630809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.574822  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0516 02:14:48.576646  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.576670  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.576848  107839 wrap.go:47] GET /healthz: (1.318584ms) 500
goroutine 11468 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009616150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009616150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0026a1640, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b43b58, 0xc003e47040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b43b58, 0xc007cfe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001563da0, 0xc00448e3e0, 0x73aefc0, 0xc007b43b58, 0xc007cfe400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.592165  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.164758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.614695  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.346477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.615009  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0516 02:14:48.632116  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.124962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.653211  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.152914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.653445  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0516 02:14:48.666311  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.666347  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.666520  107839 wrap.go:47] GET /healthz: (3.049619ms) 500
goroutine 11438 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0096068c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0096068c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0011322a0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425ab98, 0xc0034d8640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425ab98, 0xc0073abc00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425ab98, 0xc0073abc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0029c2300, 0xc00448e3e0, 0x73aefc0, 0xc00425ab98, 0xc0073abc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.672157  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.227805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.676184  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.676217  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.676364  107839 wrap.go:47] GET /healthz: (1.076179ms) 500
goroutine 11486 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0095e5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0095e5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00127a320, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000192390, 0xc0039892c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000192390, 0xc0081f4100)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000192390, 0xc0081f4100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003a1c960, 0xc00448e3e0, 0x73aefc0, 0xc000192390, 0xc0081f4100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.692791  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.773975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.693076  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0516 02:14:48.712308  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.263916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.733269  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.733634  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0516 02:14:48.752211  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.152557ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.763432  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.763469  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.763629  107839 wrap.go:47] GET /healthz: (1.369242ms) 500
goroutine 11526 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009617340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009617340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f0fd20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b43da0, 0xc003e477c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b43da0, 0xc007cffb00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b43da0, 0xc007cffb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0041063c0, 0xc00448e3e0, 0x73aefc0, 0xc007b43da0, 0xc007cffb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.773218  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.773458  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0516 02:14:48.776326  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.776356  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.776500  107839 wrap.go:47] GET /healthz: (1.118688ms) 500
goroutine 11539 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009562e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009562e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004097220, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc003005468, 0xc00307af00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc003005468, 0xc00995d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc003005468, 0xc00995d200)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc003005468, 0xc00995d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0031ad920, 0xc00448e3e0, 0x73aefc0, 0xc003005468, 0xc00995d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.792703  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.416964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.814124  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.068356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.814392  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0516 02:14:48.832453  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.263815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.853721  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.680279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.854121  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0516 02:14:48.864891  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.864933  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.865124  107839 wrap.go:47] GET /healthz: (2.557266ms) 500
goroutine 11494 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00962ccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00962ccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002493c40, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000155b30, 0xc002569a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000155b30, 0xc007c4d900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000155b30, 0xc007c4d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003f65800, 0xc00448e3e0, 0x73aefc0, 0xc000155b30, 0xc007c4d900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.872100  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.1862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.876442  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.876471  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.876616  107839 wrap.go:47] GET /healthz: (1.244795ms) 500
goroutine 11534 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00963e5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00963e5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00264f9e0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc007b43f90, 0xc005f3e000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc007b43f90, 0xc0082afa00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc007b43f90, 0xc0082afa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003c63620, 0xc00448e3e0, 0x73aefc0, 0xc007b43f90, 0xc0082afa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.893182  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.86364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.893409  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0516 02:14:48.913439  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.967757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.933282  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.322357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.933543  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0516 02:14:48.952311  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.206588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.962919  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.962982  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.963213  107839 wrap.go:47] GET /healthz: (1.121279ms) 500
goroutine 11496 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00962d2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00962d2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002260ec0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000155bf0, 0xc00307b540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000155bf0, 0xc0083d2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003eaac60, 0xc00448e3e0, 0x73aefc0, 0xc000155bf0, 0xc0083d2400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:48.973181  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.269904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.973407  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0516 02:14:48.976184  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:48.976210  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:48.976357  107839 wrap.go:47] GET /healthz: (997.365µs) 500
goroutine 11557 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0096072d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0096072d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001fa2780, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425afc8, 0xc0034d8dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7300)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425afc8, 0xc0081e7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0029c3680, 0xc00448e3e0, 0x73aefc0, 0xc00425afc8, 0xc0081e7300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:48.992194  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.241384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.031475  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (20.282878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.033012  107839 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0516 02:14:49.035241  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.012107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.036805  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.204838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.057005  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.338134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.057252  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0516 02:14:49.066347  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.066456  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.066719  107839 wrap.go:47] GET /healthz: (4.659259ms) 500
goroutine 11503 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00962ddc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00962ddc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001d654c0, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000155d88, 0xc005f94140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000155d88, 0xc0083d3900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000155d88, 0xc0083d3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003eab8c0, 0xc00448e3e0, 0x73aefc0, 0xc000155d88, 0xc0083d3900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:49.072119  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.20942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.075217  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.55634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.077656  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.077679  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.077861  107839 wrap.go:47] GET /healthz: (1.063986ms) 500
goroutine 11505 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00962df80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00962df80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001d65d00, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000155de8, 0xc005f948c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000155de8, 0xc008c9c400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000155de8, 0xc008c9c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003eabd40, 0xc00448e3e0, 0x73aefc0, 0xc000155de8, 0xc008c9c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.092995  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.783356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.093276  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0516 02:14:49.112267  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.25576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.113996  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.28116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.143557  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (12.594338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.144170  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0516 02:14:49.163411  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.163455  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.163648  107839 wrap.go:47] GET /healthz: (1.476853ms) 500
goroutine 11561 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009607c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009607c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001fa3a20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425b1b0, 0xc0034d9400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c100)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425b1b0, 0xc00915c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003b3ca20, 0xc00448e3e0, 0x73aefc0, 0xc00425b1b0, 0xc00915c100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:49.169566  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.171254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.171337  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.329669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.173892  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.177721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.174134  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0516 02:14:49.176980  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.177006  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.177162  107839 wrap.go:47] GET /healthz: (1.597666ms) 500
goroutine 11619 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0096561c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0096561c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000c37460, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc000155e20, 0xc005f3ea00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc000155e20, 0xc008c9ca00)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc000155e20, 0xc008c9ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006abe0c0, 0xc00448e3e0, 0x73aefc0, 0xc000155e20, 0xc008c9ca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.193851  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (903.194µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.195804  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.548288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.213415  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.419292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.213662  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0516 02:14:49.232339  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.396579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.234004  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.259152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.253302  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.255817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.253533  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0516 02:14:49.263120  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.263228  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.263416  107839 wrap.go:47] GET /healthz: (1.350756ms) 500
goroutine 11569 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009684930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009684930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001222720, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425b6c8, 0xc005f3f180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425b6c8, 0xc009658100)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425b6c8, 0xc009658100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003b3da40, 0xc00448e3e0, 0x73aefc0, 0xc00425b6c8, 0xc009658100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:49.272266  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.380685ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.274033  107839 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.380617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.276453  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.276479  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.276645  107839 wrap.go:47] GET /healthz: (1.21456ms) 500
goroutine 11637 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009684d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009684d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001223500, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425b750, 0xc005f3f7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425b750, 0xc009658a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425b750, 0xc009658900)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425b750, 0xc009658900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005a881e0, 0xc00448e3e0, 0x73aefc0, 0xc00425b750, 0xc009658900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.294180  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.74398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.294438  107839 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0516 02:14:49.325566  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.564544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.332522  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.485779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.336183  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.246487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.336420  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0516 02:14:49.356452  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (3.365174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.364643  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (7.725575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.365288  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.365317  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.365479  107839 wrap.go:47] GET /healthz: (1.095749ms) 500
goroutine 11627 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00969e5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00969e5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0012bf500, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc0001ba6a8, 0xc00307bb80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f100)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc0001ba6a8, 0xc00967f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006abf3e0, 0xc00448e3e0, 0x73aefc0, 0xc0001ba6a8, 0xc00967f100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45692]
I0516 02:14:49.379391  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.379441  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.379521  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.124225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:49.379609  107839 wrap.go:47] GET /healthz: (2.5301ms) 500
goroutine 11512 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009632620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009632620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001460260, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc0001927a8, 0xc003e47e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2800)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc0001927a8, 0xc0096b2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006284060, 0xc00448e3e0, 0x73aefc0, 0xc0001927a8, 0xc0096b2800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.379800  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0516 02:14:49.395446  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (4.047118ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.403249  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (7.346063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.417346  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.473139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.417587  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0516 02:14:49.437070  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (6.041576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.439305  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.748247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.453479  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.40066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.453735  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0516 02:14:49.463793  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.463827  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.464067  107839 wrap.go:47] GET /healthz: (1.159336ms) 500
goroutine 11667 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0096aad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0096aad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00169bc20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc0030b8528, 0xc005f3fe00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed000)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc0030b8528, 0xc0097ed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005ff2300, 0xc00448e3e0, 0x73aefc0, 0xc0030b8528, 0xc0097ed000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:49.472056  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.107368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.473661  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.205964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.485103  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.485135  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.485300  107839 wrap.go:47] GET /healthz: (9.876846ms) 500
goroutine 11645 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0096d4690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0096d4690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001650140, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc00425bd08, 0xc005f94f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc00425bd08, 0xc00991c400)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc00425bd08, 0xc00991c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005a89440, 0xc00448e3e0, 0x73aefc0, 0xc00425bd08, 0xc00991c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.492858  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.910074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.493327  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0516 02:14:49.512509  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.515413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.515219  107839 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.16291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.533356  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.143278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.534375  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0516 02:14:49.552325  107839 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.326406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.554054  107839 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.256306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.563053  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.563080  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.563248  107839 wrap.go:47] GET /healthz: (1.18534ms) 500
goroutine 11521 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009633dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009633dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0017c2f20, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc0001929f8, 0xc005f95540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6500)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc0001929f8, 0xc0098b6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006284fc0, 0xc00448e3e0, 0x73aefc0, 0xc0001929f8, 0xc0098b6500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45646]
I0516 02:14:49.576323  107839 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 02:14:49.576360  107839 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 02:14:49.576501  107839 wrap.go:47] GET /healthz: (1.141252ms) 500
goroutine 11676 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0096ab6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0096ab6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002abd580, 0x1f4)
net/http.Error(0x7fe64ffd5100, 0xc0030b8700, 0xc002cae3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
net/http.HandlerFunc.ServeHTTP(0xc00491a980, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc006af0100, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc000c931f0, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc00512bd40, 0xc000c931f0, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
net/http.HandlerFunc.ServeHTTP(0xc00698f480, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
net/http.HandlerFunc.ServeHTTP(0xc0045c9530, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
net/http.HandlerFunc.ServeHTTP(0xc00698f4c0, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe64ffd5100, 0xc0030b8700, 0xc001131000)
net/http.HandlerFunc.ServeHTTP(0xc00761b3b0, 0x7fe64ffd5100, 0xc0030b8700, 0xc001131000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005ff3620, 0xc00448e3e0, 0x73aefc0, 0xc0030b8700, 0xc001131000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:49.584385  107839 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (12.816782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.584703  107839 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0516 02:14:49.663380  107839 wrap.go:47] GET /healthz: (1.22428ms) 200 [Go-http-client/1.1 127.0.0.1:45646]
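
The repeated 500s above, followed by this 200 at 02:14:49.663, show the apiserver's /healthz endpoint aggregating named checks (ping, log, etcd, and each post-start hook) and refusing to report healthy until the rbac/bootstrap-roles hook finishes. A minimal, self-contained sketch of that aggregation pattern, using only the Go standard library (this is not the actual k8s.io/apiserver healthz code; the check names and bootstrapDone flag below are illustrative assumptions):

package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// bootstrapDone stands in for the rbac/bootstrap-roles post-start hook state.
var bootstrapDone atomic.Bool

// checks maps a check name to a function that returns nil when healthy.
var checks = map[string]func() error{
	"ping": func() error { return nil },
	"poststarthook/rbac/bootstrap-roles": func() error {
		if !bootstrapDone.Load() {
			return fmt.Errorf("not finished")
		}
		return nil
	},
}

// healthz writes a [+]/[-] line per check and returns 500 until all pass,
// mirroring the "healthz check failed" bodies logged above.
func healthz(w http.ResponseWriter, r *http.Request) {
	body := ""
	failed := false
	for name, check := range checks {
		if err := check(); err != nil {
			body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
			failed = true
		} else {
			body += fmt.Sprintf("[+]%s ok\n", name)
		}
	}
	if failed {
		http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
		return
	}
	w.Write([]byte(body + "healthz check passed"))
}

func main() {
	http.HandleFunc("/healthz", healthz)
	http.ListenAndServe("127.0.0.1:8080", nil)
}

Until bootstrapDone is flipped, every GET /healthz from this sketch returns 500 with the per-check breakdown, which is exactly the shape of the error bodies recorded earlier in the log.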
W0516 02:14:49.664223  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664260  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664274  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664286  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664299  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664311  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664352  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664380  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664393  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 02:14:49.664444  107839 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0516 02:14:49.664472  107839 factory.go:337] Creating scheduler from algorithm provider 'DefaultProvider'
I0516 02:14:49.664483  107839 factory.go:418] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
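
The two factory.go lines above record the test scheduler being built from the 'DefaultProvider' algorithm provider, which expands into the fixed sets of fit predicates and priority functions listed in the log. A rough sketch of that provider-to-predicate lookup, with made-up types, signatures, and a trimmed predicate list purely for illustration (this is not the real kube-scheduler factory code):

package main

import "fmt"

// FitPredicate decides whether a pod can run on a node; the real signatures
// carry full pod/node objects, this sketch just uses names.
type FitPredicate func(pod, node string) bool

// PriorityFunction scores a node for a pod.
type PriorityFunction func(pod, node string) int

// algorithmProvider bundles the predicate and priority sets that a named
// provider such as "DefaultProvider" expands to.
type algorithmProvider struct {
	FitPredicates map[string]FitPredicate
	Priorities    map[string]PriorityFunction
}

var providers = map[string]algorithmProvider{
	"DefaultProvider": {
		FitPredicates: map[string]FitPredicate{
			"PodToleratesNodeTaints": func(pod, node string) bool { return true },
			"NoDiskConflict":         func(pod, node string) bool { return true },
		},
		Priorities: map[string]PriorityFunction{
			"LeastRequestedPriority": func(pod, node string) int { return 5 },
		},
	},
}

func main() {
	p := providers["DefaultProvider"]
	fmt.Printf("Creating scheduler with %d fit predicates and %d priority functions\n",
		len(p.FitPredicates), len(p.Priorities))
}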
I0516 02:14:49.665051  107839 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665075  107839 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665190  107839 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665213  107839 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665430  107839 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665450  107839 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665569  107839 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665585  107839 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665888  107839 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.665908  107839 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666027  107839 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666046  107839 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666343  107839 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666361  107839 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666435  107839 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666473  107839 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666859  107839 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.666875  107839 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.667294  107839 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.667319  107839 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0516 02:14:49.667643  107839 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (442.415µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.667702  107839 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (519.038µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:49.668350  107839 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (412.12µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46002]
I0516 02:14:49.668488  107839 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (719.426µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46000]
I0516 02:14:49.668838  107839 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (370.08µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46006]
I0516 02:14:49.669346  107839 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (406.59µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46008]
I0516 02:14:49.669505  107839 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=22109 labels= fields= timeout=6m20s
I0516 02:14:49.669829  107839 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (380.883µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46010]
I0516 02:14:49.670198  107839 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=22109 labels= fields= timeout=5m12s
I0516 02:14:49.670577  107839 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (332.613µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:49.670786  107839 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (402.714µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:49.670930  107839 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=22109 labels= fields= timeout=8m14s
I0516 02:14:49.671208  107839 get.go:250] Starting watch for /api/v1/services, rv=22109 labels= fields= timeout=7m3s
I0516 02:14:49.671418  107839 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=22109 labels= fields= timeout=5m21s
I0516 02:14:49.671593  107839 get.go:250] Starting watch for /api/v1/nodes, rv=22109 labels= fields= timeout=8m30s
I0516 02:14:49.671616  107839 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=22109 labels= fields= timeout=5m51s
I0516 02:14:49.672429  107839 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (428.512µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46004]
I0516 02:14:49.672920  107839 get.go:250] Starting watch for /api/v1/pods, rv=22109 labels= fields= timeout=9m32s
I0516 02:14:49.673553  107839 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=22109 labels= fields= timeout=9m26s
I0516 02:14:49.675284  107839 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=22109 labels= fields= timeout=9m31s
I0516 02:14:49.676864  107839 wrap.go:47] GET /healthz: (1.011579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.678341  107839 wrap.go:47] GET /api/v1/namespaces/default: (1.151586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.680341  107839 wrap.go:47] POST /api/v1/namespaces: (1.650844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.681682  107839 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.014258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.685654  107839 wrap.go:47] POST /api/v1/namespaces/default/services: (3.499397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.686930  107839 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (910.762µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.689704  107839 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.201195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:49.765025  107839 shared_informer.go:176] caches populated
I0516 02:14:49.868446  107839 shared_informer.go:176] caches populated
I0516 02:14:49.968622  107839 shared_informer.go:176] caches populated
I0516 02:14:50.068862  107839 shared_informer.go:176] caches populated
I0516 02:14:50.169079  107839 shared_informer.go:176] caches populated
I0516 02:14:50.269315  107839 shared_informer.go:176] caches populated
I0516 02:14:50.369553  107839 shared_informer.go:176] caches populated
I0516 02:14:50.469789  107839 shared_informer.go:176] caches populated
I0516 02:14:50.570028  107839 shared_informer.go:176] caches populated
I0516 02:14:50.668694  107839 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 02:14:50.669039  107839 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 02:14:50.670234  107839 shared_informer.go:176] caches populated
I0516 02:14:50.670372  107839 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 02:14:50.670546  107839 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 02:14:50.671259  107839 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 02:14:50.674886  107839 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 02:14:50.770468  107839 shared_informer.go:176] caches populated
I0516 02:14:50.777981  107839 wrap.go:47] POST /api/v1/nodes: (6.924806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.783420  107839 wrap.go:47] POST /api/v1/nodes: (4.773178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.788243  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.788271  107839 scheduler.go:452] Attempting to schedule pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.788523  107839 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-0"
I0516 02:14:50.788543  107839 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0516 02:14:50.788591  107839 factory.go:711] Attempting to bind test-pod to test-node-0
I0516 02:14:50.791650  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod/binding: (2.113711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:50.791868  107839 scheduler.go:589] pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod is bound successfully on node test-node-0, 2 nodes evaluated, 2 nodes were found feasible
I0516 02:14:50.794709  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (2.348556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:50.795472  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods: (3.795165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.897771  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (1.516792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.905534  107839 wrap.go:47] DELETE /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (7.293576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.909422  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (2.350984ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.911711  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods: (1.654267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.912428  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.912453  107839 scheduler.go:452] Attempting to schedule pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.912678  107839 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-1"
I0516 02:14:50.912699  107839 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0516 02:14:50.912773  107839 framework.go:102] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0516 02:14:50.912790  107839 factory.go:662] Error scheduling unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0516 02:14:50.912820  107839 factory.go:720] Updating pod condition for unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0516 02:14:50.920192  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (2.89318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:50.921783  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (4.004075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:50.925446  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (2.27743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:50.925851  107839 wrap.go:47] PUT /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod/status: (8.430372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0516 02:14:50.932710  107839 wrap.go:47] DELETE /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (6.40217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:50.933728  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.933821  107839 scheduler.go:448] Skip schedule deleting pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.935285  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (989.133µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:50.937309  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods: (1.596784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:50.937646  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.937681  107839 scheduler.go:452] Attempting to schedule pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:50.937944  107839 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-0"
I0516 02:14:50.937995  107839 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0516 02:14:50.938055  107839 framework.go:98] rejected by prebind-plugin at prebind: reject pod test-pod
E0516 02:14:50.938083  107839 factory.go:662] Error scheduling unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod: rejected by prebind-plugin at prebind: reject pod test-pod; retrying
I0516 02:14:50.938121  107839 factory.go:720] Updating pod condition for unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod to (PodScheduled==False, Reason=Unschedulable)
I0516 02:14:50.940861  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (2.341441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:50.941139  107839 wrap.go:47] PUT /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod/status: (2.745789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:50.943643  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (3.273324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I0516 02:14:50.946868  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (1.779912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
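
The two failure paths above differ on purpose: when the prebind plugin returns an internal error ("injecting failure for pod test-pod"), the pod condition is updated with Reason=SchedulerError, while an explicit rejection ("reject pod test-pod") is recorded as Reason=Unschedulable. The following is a minimal, self-contained Go sketch of that distinction; the types and names are hypothetical and are not the real scheduler framework API at this commit.

// Hypothetical sketch: a prebind plugin that can either fail (internal error)
// or reject (unschedulable) a pod, mirroring the injected behaviors in the log.
package main

import "fmt"

type Code int

const (
	Success       Code = iota
	Error              // surfaces in the log as Reason=SchedulerError
	Unschedulable      // surfaces in the log as Reason=Unschedulable
)

type Status struct {
	code Code
	msg  string
}

type fakePrebindPlugin struct {
	failPrebind   bool
	rejectPrebind bool
}

func (p *fakePrebindPlugin) Prebind(pod, node string) *Status {
	if p.failPrebind {
		return &Status{Error, fmt.Sprintf("injecting failure for pod %s", pod)}
	}
	if p.rejectPrebind {
		return &Status{Unschedulable, fmt.Sprintf("reject pod %s", pod)}
	}
	return &Status{Success, ""}
}

func main() {
	p := &fakePrebindPlugin{failPrebind: true}
	st := p.Prebind("test-pod", "test-node-1")
	fmt.Println(st.code == Error, st.msg)
}
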
I0516 02:14:51.039707  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (1.685183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:51.043271  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:51.043312  107839 scheduler.go:448] Skip schedule deleting pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:51.046863  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (3.290398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:51.049336  107839 wrap.go:47] DELETE /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (9.052082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:51.052384  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (1.489251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:51.056661  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods: (3.567569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:51.057120  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:51.057145  107839 scheduler.go:452] Attempting to schedule pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:51.057364  107839 scheduler_binder.go:256] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-1"
I0516 02:14:51.057393  107839 scheduler_binder.go:266] AssumePodVolumes for pod "unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0516 02:14:51.057444  107839 framework.go:102] error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod
E0516 02:14:51.057465  107839 factory.go:662] Error scheduling unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod: error while running prebind-plugin prebind plugin for pod test-pod: injecting failure for pod test-pod; retrying
I0516 02:14:51.057493  107839 factory.go:720] Updating pod condition for unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0516 02:14:51.060101  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (1.653509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0516 02:14:51.062557  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (3.970264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:51.064510  107839 wrap.go:47] PUT /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod/status: (6.732399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46068]
I0516 02:14:51.068449  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (1.194387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:51.071972  107839 scheduling_queue.go:795] About to try and schedule pod unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:51.072017  107839 scheduler.go:448] Skip schedule deleting pod: unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/test-pod
I0516 02:14:51.074252  107839 wrap.go:47] POST /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/events: (1.982329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46072]
I0516 02:14:51.075433  107839 wrap.go:47] DELETE /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (6.499059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:51.079453  107839 wrap.go:47] GET /api/v1/namespaces/unreserve-pluginf77fd86c-19c6-42a6-bab5-e2d88b80b3d9/pods/test-pod: (1.612437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
E0516 02:14:51.080271  107839 scheduling_queue.go:798] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0516 02:14:51.080606  107839 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=22109&timeout=6m20s&timeoutSeconds=380&watch=true: (1.411343367s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46016]
I0516 02:14:51.080780  107839 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=22109&timeout=5m12s&timeoutSeconds=312&watch=true: (1.41084529s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46014]
I0516 02:14:51.080899  107839 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=22109&timeout=8m14s&timeoutSeconds=494&watch=true: (1.410337821s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46020]
I0516 02:14:51.081040  107839 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=22109&timeout=5m51s&timeoutSeconds=351&watch=true: (1.409633437s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45692]
I0516 02:14:51.081154  107839 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=22109&timeout=5m21s&timeoutSeconds=321&watch=true: (1.409933881s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46010]
I0516 02:14:51.081261  107839 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=22109&timeout=9m26s&timeoutSeconds=566&watch=true: (1.407939268s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46004]
I0516 02:14:51.081373  107839 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=22109&timeout=9m31s&timeoutSeconds=571&watch=true: (1.406369059s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46002]
I0516 02:14:51.081479  107839 wrap.go:47] GET /api/v1/services?resourceVersion=22109&timeout=7m3s&timeoutSeconds=423&watch=true: (1.41053741s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46006]
I0516 02:14:51.081596  107839 wrap.go:47] GET /api/v1/nodes?resourceVersion=22109&timeout=8m30s&timeoutSeconds=510&watch=true: (1.410223893s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45646]
I0516 02:14:51.081703  107839 wrap.go:47] GET /api/v1/pods?resourceVersion=22109&timeout=9m32s&timeoutSeconds=572&watch=true: (1.410126899s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46008]
I0516 02:14:51.090660  107839 wrap.go:47] DELETE /api/v1/nodes: (8.804526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:51.090851  107839 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0516 02:14:51.093281  107839 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.171611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0516 02:14:51.095975  107839 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.210979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
framework_test.go:377: test #1: Expected the unreserve plugin to be called 1 times, was called 0 times.
framework_test.go:385: test #2: Expected the unreserve plugin to be called 1 times, was called 2 times.
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-020812.xml
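
The two assertions above check how many times the test's unreserve plugin ran after prebind did not succeed: exactly one Unreserve call per failed attempt is expected, so 0 calls (test #1) and 2 calls (test #2) both fail. Below is a hypothetical, self-contained sketch of that counting pattern; the names are illustrative and this is not the real test/integration/scheduler code.

// Hypothetical sketch: when prebind does not succeed for a pod that was
// already reserved on a node, Unreserve is expected to run exactly once to
// roll the reservation back.
package main

import "fmt"

type unreservePlugin struct {
	numUnreserveCalled int
}

func (u *unreservePlugin) Unreserve(pod, node string) {
	u.numUnreserveCalled++
}

// runPrebindThenRollback models one scheduling cycle.
func runPrebindThenRollback(u *unreservePlugin, prebindOK bool, pod, node string) {
	if !prebindOK {
		u.Unreserve(pod, node)
	}
}

func main() {
	u := &unreservePlugin{}
	runPrebindThenRollback(u, false, "test-pod", "test-node-0")
	if u.numUnreserveCalled != 1 {
		fmt.Printf("Expected the unreserve plugin to be called 1 time, was called %d times.\n", u.numUnreserveCalled)
	}
}
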



Show 1429 Passed Tests

Show 4 Skipped Tests

Error lines from build-log.txt

... skipping 317 lines ...
W0516 01:59:50.762] I0516 01:59:50.761886   47831 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0516 01:59:50.763] I0516 01:59:50.761996   47831 server.go:558] external host was not specified, using 172.17.0.2
W0516 01:59:50.763] W0516 01:59:50.762010   47831 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0516 01:59:50.763] I0516 01:59:50.762631   47831 server.go:145] Version: v1.16.0-alpha.0.61+2ebd40964b8b67
W0516 01:59:51.121] I0516 01:59:51.120628   47831 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0516 01:59:51.121] I0516 01:59:51.120666   47831 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0516 01:59:51.122] E0516 01:59:51.121333   47831 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.123] E0516 01:59:51.121394   47831 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.123] E0516 01:59:51.121442   47831 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.123] E0516 01:59:51.121490   47831 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.124] E0516 01:59:51.121533   47831 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.124] E0516 01:59:51.121572   47831 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.124] E0516 01:59:51.121607   47831 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.125] E0516 01:59:51.121645   47831 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.125] E0516 01:59:51.121730   47831 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.125] E0516 01:59:51.121790   47831 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.126] E0516 01:59:51.121830   47831 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:51.126] E0516 01:59:51.121862   47831 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
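
The run of "duplicate metrics collector registration attempted" errors above comes from attempting to register collectors with the same fully-qualified metric name more than once in the same Prometheus registry, apparently when the admission plugins are initialized a second time in the same process. A hedged illustration with prometheus/client_golang follows; the metric name is made up for the example and is not the apiserver's own registration code.

// Illustration only: the second Register of an identical collector fails
// with AlreadyRegisteredError, which is what the log is reporting.
package main

import (
	"errors"
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	newDepth := func() prometheus.Counter {
		return prometheus.NewCounter(prometheus.CounterOpts{
			Name: "admission_quota_controller_depth", // illustrative name
			Help: "example duplicate registration",
		})
	}

	// First registration succeeds.
	if err := prometheus.Register(newDepth()); err != nil {
		fmt.Println("unexpected:", err)
	}

	// Second registration of an identical collector fails.
	err := prometheus.Register(newDepth())
	var are prometheus.AlreadyRegisteredError
	if errors.As(err, &are) {
		fmt.Println("duplicate metrics collector registration attempted")
	}
}
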
W0516 01:59:51.126] I0516 01:59:51.121895   47831 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0516 01:59:51.127] I0516 01:59:51.121905   47831 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0516 01:59:51.127] I0516 01:59:51.124376   47831 client.go:354] parsed scheme: ""
W0516 01:59:51.127] I0516 01:59:51.124402   47831 client.go:354] scheme "" not registered, fallback to default scheme
W0516 01:59:51.127] I0516 01:59:51.124489   47831 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 01:59:51.128] I0516 01:59:51.124599   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0516 01:59:52.046] W0516 01:59:52.045977   47831 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0516 01:59:52.119] I0516 01:59:52.118754   47831 client.go:354] parsed scheme: ""
W0516 01:59:52.120] I0516 01:59:52.120172   47831 client.go:354] scheme "" not registered, fallback to default scheme
W0516 01:59:52.121] I0516 01:59:52.120875   47831 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 01:59:52.121] I0516 01:59:52.121593   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 01:59:52.123] I0516 01:59:52.122784   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 01:59:53.438] E0516 01:59:53.437571   47831 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.439] E0516 01:59:53.437652   47831 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.439] E0516 01:59:53.437742   47831 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.439] E0516 01:59:53.437780   47831 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.439] E0516 01:59:53.437839   47831 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.440] E0516 01:59:53.437984   47831 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.440] E0516 01:59:53.438018   47831 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.440] E0516 01:59:53.438042   47831 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.441] E0516 01:59:53.438119   47831 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.441] E0516 01:59:53.438225   47831 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.441] E0516 01:59:53.438274   47831 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.442] E0516 01:59:53.438320   47831 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 01:59:53.442] I0516 01:59:53.438371   47831 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0516 01:59:53.442] I0516 01:59:53.438390   47831 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0516 01:59:53.443] I0516 01:59:53.440703   47831 client.go:354] parsed scheme: ""
W0516 01:59:53.443] I0516 01:59:53.440771   47831 client.go:354] scheme "" not registered, fallback to default scheme
W0516 01:59:53.443] I0516 01:59:53.440888   47831 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 01:59:53.443] I0516 01:59:53.441656   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 50 lines ...
W0516 02:00:48.380] I0516 02:00:48.379086   51197 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...
W0516 02:00:48.392] I0516 02:00:48.391913   51197 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
W0516 02:00:48.397] I0516 02:00:48.393279   51197 event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"b5b7c25f-9cef-455f-b54a-5ee37f1bc472", APIVersion:"v1", ResourceVersion:"152", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' c90e435f8436_1bd248e4-0897-41c3-89b7-7a460f441c23 became leader
I0516 02:00:48.498] +++ [0516 02:00:48] On try 3, controller-manager: ok
W0516 02:00:48.605] I0516 02:00:48.605378   51197 plugins.go:103] No cloud provider specified.
W0516 02:00:48.795] W0516 02:00:48.605451   51197 controllermanager.go:543] "serviceaccount-token" is disabled because there is no private key
W0516 02:00:48.796] E0516 02:00:48.606651   51197 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0516 02:00:48.796] W0516 02:00:48.606682   51197 controllermanager.go:515] Skipping "service"
W0516 02:00:48.796] I0516 02:00:48.607105   51197 controllermanager.go:523] Started "podgc"
W0516 02:00:48.797] I0516 02:00:48.607153   51197 gc_controller.go:76] Starting GC controller
W0516 02:00:48.797] I0516 02:00:48.607190   51197 controller_utils.go:1029] Waiting for caches to sync for GC controller
W0516 02:00:48.797] I0516 02:00:48.607820   51197 controllermanager.go:523] Started "job"
W0516 02:00:48.797] I0516 02:00:48.607991   51197 job_controller.go:143] Starting job controller
... skipping 89 lines ...
W0516 02:00:49.469] I0516 02:00:49.468235   51197 garbagecollector.go:130] Starting garbage collector controller
W0516 02:00:49.469] I0516 02:00:49.468263   51197 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0516 02:00:49.470] I0516 02:00:49.468470   51197 graph_builder.go:307] GraphBuilder running
W0516 02:00:49.470] I0516 02:00:49.468585   51197 stateful_set.go:145] Starting stateful set controller
W0516 02:00:49.470] I0516 02:00:49.468680   51197 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
W0516 02:00:49.470] I0516 02:00:49.468914   51197 node_lifecycle_controller.go:77] Sending events to api server
W0516 02:00:49.471] E0516 02:00:49.469212   51197 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0516 02:00:49.471] W0516 02:00:49.469289   51197 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0516 02:00:49.471] I0516 02:00:49.470044   51197 controllermanager.go:523] Started "persistentvolume-expander"
W0516 02:00:49.471] I0516 02:00:49.470929   51197 controllermanager.go:523] Started "daemonset"
W0516 02:00:49.472] I0516 02:00:49.470136   51197 expand_controller.go:153] Starting expand controller
W0516 02:00:49.472] I0516 02:00:49.471134   51197 controller_utils.go:1029] Waiting for caches to sync for expand controller
W0516 02:00:49.472] I0516 02:00:49.471208   51197 daemon_controller.go:267] Starting daemon sets controller
... skipping 8 lines ...
W0516 02:00:49.476] I0516 02:00:49.476628   51197 controllermanager.go:523] Started "serviceaccount"
W0516 02:00:49.477] I0516 02:00:49.476988   51197 serviceaccounts_controller.go:115] Starting service account controller
W0516 02:00:49.477] I0516 02:00:49.477267   51197 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0516 02:00:49.479] I0516 02:00:49.479211   51197 controllermanager.go:523] Started "persistentvolume-binder"
W0516 02:00:49.482] I0516 02:00:49.482089   51197 pv_controller_base.go:271] Starting persistent volume controller
W0516 02:00:49.482] I0516 02:00:49.482201   51197 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
W0516 02:00:49.544] W0516 02:00:49.544013   51197 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0516 02:00:49.561] I0516 02:00:49.560755   51197 controller_utils.go:1036] Caches are synced for namespace controller
W0516 02:00:49.607] I0516 02:00:49.606924   51197 controller_utils.go:1036] Caches are synced for certificate controller
W0516 02:00:49.610] I0516 02:00:49.610475   51197 controller_utils.go:1036] Caches are synced for TTL controller
W0516 02:00:49.678] I0516 02:00:49.677765   51197 controller_utils.go:1036] Caches are synced for service account controller
W0516 02:00:49.682] I0516 02:00:49.681682   47831 controller.go:606] quota admission added evaluator for: serviceaccounts
I0516 02:00:49.786] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
... skipping 25 lines ...
W0516 02:00:50.124] I0516 02:00:50.004364   51197 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0516 02:00:50.124] I0516 02:00:50.004578   51197 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"769b7ae7-9550-49fe-8074-17cbd89e6267", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0516 02:00:50.125] I0516 02:00:50.007416   51197 controller_utils.go:1036] Caches are synced for GC controller
W0516 02:00:50.125] I0516 02:00:50.008211   51197 controller_utils.go:1036] Caches are synced for job controller
W0516 02:00:50.125] I0516 02:00:50.008776   51197 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0516 02:00:50.125] I0516 02:00:50.042104   51197 controller_utils.go:1036] Caches are synced for HPA controller
W0516 02:00:50.126] E0516 02:00:50.059264   51197 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0516 02:00:50.126] I0516 02:00:50.069054   51197 controller_utils.go:1036] Caches are synced for stateful set controller
W0516 02:00:50.126] I0516 02:00:50.073251   51197 controller_utils.go:1036] Caches are synced for daemon sets controller
W0516 02:00:50.126] I0516 02:00:50.074497   51197 controller_utils.go:1036] Caches are synced for PVC protection controller
W0516 02:00:50.127] I0516 02:00:50.075335   51197 controller_utils.go:1036] Caches are synced for disruption controller
W0516 02:00:50.127] I0516 02:00:50.075391   51197 disruption.go:294] Sending events to api server.
W0516 02:00:50.127] I0516 02:00:50.105681   51197 controller_utils.go:1036] Caches are synced for deployment controller
... skipping 68 lines ...
I0516 02:00:54.373] +++ working dir: /go/src/k8s.io/kubernetes
I0516 02:00:54.377] +++ command: run_RESTMapper_evaluation_tests
I0516 02:00:54.394] +++ [0516 02:00:54] Creating namespace namespace-1557972054-13279
I0516 02:00:54.619] namespace/namespace-1557972054-13279 created
I0516 02:00:54.753] Context "test" modified.
I0516 02:00:54.776] +++ [0516 02:00:54] Testing RESTMapper
I0516 02:00:54.924] +++ [0516 02:00:54] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0516 02:00:54.944] +++ exit code: 0
I0516 02:00:55.296] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0516 02:00:55.506] bindings                                                                      true         Binding
I0516 02:00:55.506] componentstatuses                 cs                                          false        ComponentStatus
I0516 02:00:55.506] configmaps                        cm                                          true         ConfigMap
I0516 02:00:55.507] endpoints                         ep                                          true         Endpoints
... skipping 640 lines ...
I0516 02:01:25.798] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:26.069] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:26.539] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:26.805] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:26.956] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:27.101] (Bpod "valid-pod" force deleted
W0516 02:01:27.202] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0516 02:01:27.281] error: setting 'all' parameter but found a non empty selector. 
W0516 02:01:27.281] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 02:01:27.382] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0516 02:01:27.485] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0516 02:01:27.627] (Bnamespace/test-kubectl-describe-pod created
I0516 02:01:27.772] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0516 02:01:27.913] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0516 02:01:29.325] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0516 02:01:29.463] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0516 02:01:29.552] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0516 02:01:29.670] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0516 02:01:29.879] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:01:30.173] (Bpod/env-test-pod created
W0516 02:01:30.273] error: min-available and max-unavailable cannot be both specified
I0516 02:01:30.501] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0516 02:01:30.502] Name:         env-test-pod
I0516 02:01:30.502] Namespace:    test-kubectl-describe-pod
I0516 02:01:30.502] Priority:     0
I0516 02:01:30.502] Node:         <none>
I0516 02:01:30.503] Labels:       <none>
... skipping 143 lines ...
I0516 02:01:47.077] (Bservice "modified" deleted
I0516 02:01:47.206] replicationcontroller "modified" deleted
I0516 02:01:47.633] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:01:47.891] (Bpod/valid-pod created
I0516 02:01:48.060] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:48.305] (BSuccessful
I0516 02:01:48.310] message:Error from server: cannot restore map from string
I0516 02:01:48.310] has:cannot restore map from string
W0516 02:01:48.411] E0516 02:01:48.292921   47831 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0516 02:01:48.511] Successful
I0516 02:01:48.512] message:pod/valid-pod patched (no change)
I0516 02:01:48.512] has:patched (no change)
I0516 02:01:48.577] pod/valid-pod patched
I0516 02:01:48.770] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 02:01:48.923] (Bcore.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0516 02:01:49.643] (Bpod/valid-pod patched
I0516 02:01:49.805] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0516 02:01:49.935] (Bpod/valid-pod patched
I0516 02:01:50.080] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0516 02:01:50.393] (Bpod/valid-pod patched
I0516 02:01:50.550] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 02:01:50.834] (B+++ [0516 02:01:50] "kubectl patch with resourceVersion 515" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0516 02:01:51.293] pod "valid-pod" deleted
I0516 02:01:51.313] pod/valid-pod replaced
I0516 02:01:51.498] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0516 02:01:51.773] (BSuccessful
I0516 02:01:51.773] message:error: --grace-period must have --force specified
I0516 02:01:51.774] has:\-\-grace-period must have \-\-force specified
I0516 02:01:52.039] Successful
I0516 02:01:52.040] message:error: --timeout must have --force specified
I0516 02:01:52.040] has:\-\-timeout must have \-\-force specified
I0516 02:01:52.328] node/node-v1-test created
W0516 02:01:52.429] W0516 02:01:52.327835   51197 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0516 02:01:52.618] node/node-v1-test replaced
I0516 02:01:52.776] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0516 02:01:52.911] (Bnode "node-v1-test" deleted
I0516 02:01:53.069] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 02:01:53.707] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0516 02:01:55.429] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 16 lines ...
I0516 02:01:55.747]     name: kubernetes-pause
I0516 02:01:55.747] has:localonlyvalue
I0516 02:01:55.816] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0516 02:01:56.076] (Bcore.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0516 02:01:56.212] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0516 02:01:56.355] (Bpod/valid-pod labeled
W0516 02:01:56.456] error: 'name' already has a value (valid-pod), and --overwrite is false
I0516 02:01:56.557] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0516 02:01:56.681] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:01:56.825] (Bpod "valid-pod" force deleted
W0516 02:01:56.926] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 02:01:57.026] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:01:57.027] (B+++ [0516 02:01:56] Creating namespace namespace-1557972116-30281
... skipping 82 lines ...
I0516 02:02:07.561] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0516 02:02:07.564] +++ working dir: /go/src/k8s.io/kubernetes
I0516 02:02:07.566] +++ command: run_kubectl_create_error_tests
I0516 02:02:07.579] +++ [0516 02:02:07] Creating namespace namespace-1557972127-24513
I0516 02:02:07.860] namespace/namespace-1557972127-24513 created
I0516 02:02:07.965] Context "test" modified.
I0516 02:02:07.974] +++ [0516 02:02:07] Testing kubectl create with error
W0516 02:02:08.075] Error: must specify one of -f and -k
W0516 02:02:08.364] 
W0516 02:02:08.365] Create a resource from a file or from stdin.
W0516 02:02:08.365] 
W0516 02:02:08.366]  JSON and YAML formats are accepted.
W0516 02:02:08.366] 
W0516 02:02:08.366] Examples:
... skipping 41 lines ...
W0516 02:02:08.375] 
W0516 02:02:08.375] Usage:
W0516 02:02:08.375]   kubectl create -f FILENAME [options]
W0516 02:02:08.375] 
W0516 02:02:08.375] Use "kubectl <command> --help" for more information about a given command.
W0516 02:02:08.376] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0516 02:02:08.476] +++ [0516 02:02:08] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0516 02:02:08.577] kubectl convert is DEPRECATED and will be removed in a future version.
W0516 02:02:08.924] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 02:02:09.025] +++ exit code: 0
I0516 02:02:09.025] Recording: run_kubectl_apply_tests
I0516 02:02:09.025] Running command: run_kubectl_apply_tests
I0516 02:02:09.025] 
... skipping 20 lines ...
W0516 02:02:13.415] I0516 02:02:13.414150   47831 client.go:354] scheme "" not registered, fallback to default scheme
W0516 02:02:13.415] I0516 02:02:13.414201   47831 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 02:02:13.416] I0516 02:02:13.414281   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:02:13.416] I0516 02:02:13.415973   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:02:13.418] I0516 02:02:13.418124   47831 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0516 02:02:13.519] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0516 02:02:13.619] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0516 02:02:13.726] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0516 02:02:13.772] +++ exit code: 0
I0516 02:02:13.814] Recording: run_kubectl_run_tests
I0516 02:02:13.815] Running command: run_kubectl_run_tests
I0516 02:02:13.842] 
I0516 02:02:13.846] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 95 lines ...
I0516 02:02:17.622] Context "test" modified.
I0516 02:02:17.631] +++ [0516 02:02:17] Testing kubectl create filter
I0516 02:02:17.864] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:18.154] (Bpod/selector-test-pod created
I0516 02:02:18.323] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0516 02:02:18.454] (BSuccessful
I0516 02:02:18.454] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0516 02:02:18.455] has:pods "selector-test-pod-dont-apply" not found
I0516 02:02:18.569] pod "selector-test-pod" deleted
I0516 02:02:18.593] +++ exit code: 0
I0516 02:02:18.637] Recording: run_kubectl_apply_deployments_tests
I0516 02:02:18.638] Running command: run_kubectl_apply_deployments_tests
I0516 02:02:18.666] 
... skipping 39 lines ...
W0516 02:02:22.887] I0516 02:02:22.796794   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972138-5318", Name:"nginx", UID:"e3105c96-e7dc-4caa-a6ce-39692b1a9a08", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8c9ccf86d to 3
W0516 02:02:22.888] I0516 02:02:22.806506   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-8c9ccf86d", UID:"05c70180-3e69-45f4-8f0c-5c3cd0e8dd6d", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-qrrh7
W0516 02:02:22.888] I0516 02:02:22.823393   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-8c9ccf86d", UID:"05c70180-3e69-45f4-8f0c-5c3cd0e8dd6d", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-xzhlf
W0516 02:02:22.888] I0516 02:02:22.823933   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-8c9ccf86d", UID:"05c70180-3e69-45f4-8f0c-5c3cd0e8dd6d", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-4d7mn
I0516 02:02:23.006] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0516 02:02:27.448] (BSuccessful
I0516 02:02:27.449] message:Error from server (Conflict): error when applying patch:
I0516 02:02:27.450] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557972138-5318\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0516 02:02:27.450] to:
I0516 02:02:27.450] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0516 02:02:27.450] Name: "nginx", Namespace: "namespace-1557972138-5318"
I0516 02:02:27.454] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557972138-5318\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-16T02:02:22Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-16T02:02:22Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-16T02:02:22Z"]] "name":"nginx" "namespace":"namespace-1557972138-5318" "resourceVersion":"630" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557972138-5318/deployments/nginx" "uid":"e3105c96-e7dc-4caa-a6ce-39692b1a9a08"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-16T02:02:22Z" "lastUpdateTime":"2019-05-16T02:02:22Z" "message":"Deployment does not have minimum availability." 
"reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0516 02:02:27.454] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0516 02:02:27.454] has:Error from server (Conflict)
W0516 02:02:31.933] E0516 02:02:31.933219   51197 replica_set.go:450] Sync "namespace-1557972138-5318/nginx-8c9ccf86d" failed with replicasets.apps "nginx-8c9ccf86d" not found
I0516 02:02:32.850] deployment.extensions/nginx configured
W0516 02:02:32.952] I0516 02:02:32.863475   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972138-5318", Name:"nginx", UID:"576490ad-910a-450b-ac66-d44acb390c0c", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0516 02:02:32.952] I0516 02:02:32.864411   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-86bb9b4d9f", UID:"57728884-f014-45b7-a72e-17a3770b809f", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-c24cd
W0516 02:02:32.953] I0516 02:02:32.878011   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-86bb9b4d9f", UID:"57728884-f014-45b7-a72e-17a3770b809f", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-zlwnm
W0516 02:02:32.954] I0516 02:02:32.884901   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-86bb9b4d9f", UID:"57728884-f014-45b7-a72e-17a3770b809f", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-9c97w
I0516 02:02:33.054] Successful
I0516 02:02:33.055] message:        "name": "nginx2"
I0516 02:02:33.055]           "name": "nginx2"
I0516 02:02:33.055] has:"name": "nginx2"
W0516 02:02:38.152] E0516 02:02:38.151787   51197 replica_set.go:450] Sync "namespace-1557972138-5318/nginx-86bb9b4d9f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-86bb9b4d9f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557972138-5318/nginx-86bb9b4d9f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 57728884-f014-45b7-a72e-17a3770b809f, UID in object meta: 
I0516 02:02:39.125] Successful
I0516 02:02:39.125] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
I0516 02:02:39.125] has:Invalid value
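A minimal sketch (invocation assumed, not quoted in the log) of how this validation error is provoked against the same "nginx" Deployment: changing the pod template labels so they no longer satisfy the existing selector, for example:
  kubectl patch deployment nginx --type=merge \
    -p '{"spec":{"template":{"metadata":{"labels":{"name":"nginx3"}}}}}'
The API server then rejects the update with the Invalid value / `selector` does not match template `labels` message shown above.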
W0516 02:02:39.226] I0516 02:02:39.096338   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972138-5318", Name:"nginx", UID:"9ff6c82e-9bd9-438f-b6a5-b13fbc6d33c6", APIVersion:"apps/v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0516 02:02:39.227] I0516 02:02:39.104801   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-86bb9b4d9f", UID:"4d2f1db4-bdd8-480a-87bc-7f2bc4113505", APIVersion:"apps/v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-pwlkl
W0516 02:02:39.227] I0516 02:02:39.114657   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972138-5318", Name:"nginx-86bb9b4d9f", UID:"4d2f1db4-bdd8-480a-87bc-7f2bc4113505", APIVersion:"apps/v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-k2c64
... skipping 159 lines ...
I0516 02:02:42.499] +++ [0516 02:02:42] Creating namespace namespace-1557972162-20648
I0516 02:02:42.605] namespace/namespace-1557972162-20648 created
I0516 02:02:42.710] Context "test" modified.
I0516 02:02:42.728] +++ [0516 02:02:42] Testing kubectl get
I0516 02:02:42.863] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:43.011] Successful
I0516 02:02:43.011] message:Error from server (NotFound): pods "abc" not found
I0516 02:02:43.012] has:pods "abc" not found
I0516 02:02:43.142] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:43.274] Successful
I0516 02:02:43.274] message:Error from server (NotFound): pods "abc" not found
I0516 02:02:43.275] has:pods "abc" not found
I0516 02:02:43.411] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:43.543] Successful
I0516 02:02:43.543] message:{
I0516 02:02:43.544]     "apiVersion": "v1",
I0516 02:02:43.544]     "items": [],
... skipping 23 lines ...
I0516 02:02:44.084] has not:No resources found
I0516 02:02:44.531] Successful
I0516 02:02:44.531] message:NAME
I0516 02:02:44.531] has not:No resources found
I0516 02:02:44.671] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:44.837] Successful
I0516 02:02:44.837] message:error: the server doesn't have a resource type "foobar"
I0516 02:02:44.837] has not:No resources found
I0516 02:02:44.970] Successful
I0516 02:02:44.971] message:No resources found.
I0516 02:02:44.971] has:No resources found
I0516 02:02:45.141] Successful
I0516 02:02:45.141] message:
I0516 02:02:45.141] has not:No resources found
I0516 02:02:45.278] Successful
I0516 02:02:45.279] message:No resources found.
I0516 02:02:45.280] has:No resources found
I0516 02:02:45.415] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:45.552] Successful
I0516 02:02:45.552] message:Error from server (NotFound): pods "abc" not found
I0516 02:02:45.553] has:pods "abc" not found
I0516 02:02:45.554] FAIL!
I0516 02:02:45.555] message:Error from server (NotFound): pods "abc" not found
I0516 02:02:45.555] has not:List
I0516 02:02:45.555] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0516 02:02:45.730] Successful
I0516 02:02:45.730] message:I0516 02:02:45.654270   61455 loader.go:359] Config loaded from file:  /tmp/tmp.hReFcf0sTR/.kube/config
I0516 02:02:45.730] I0516 02:02:45.656004   61455 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0516 02:02:45.731] I0516 02:02:45.699386   61455 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 3 milliseconds
... skipping 888 lines ...
I0516 02:02:51.732] Successful
I0516 02:02:51.836] message:NAME    DATA   AGE
I0516 02:02:51.836] one     0      0s
I0516 02:02:51.836] three   0      0s
I0516 02:02:51.837] two     0      0s
I0516 02:02:51.837] STATUS    REASON          MESSAGE
I0516 02:02:51.837] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 02:02:51.837] has not:watch is only supported on individual resources
I0516 02:02:52.854] Successful
I0516 02:02:52.854] message:STATUS    REASON          MESSAGE
I0516 02:02:52.854] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 02:02:52.855] has not:watch is only supported on individual resources
I0516 02:02:52.862] +++ [0516 02:02:52] Creating namespace namespace-1557972172-29485
I0516 02:02:52.982] namespace/namespace-1557972172-29485 created
I0516 02:02:53.096] Context "test" modified.
I0516 02:02:53.242] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:53.518] pod/valid-pod created
... skipping 104 lines ...
I0516 02:02:53.674] }
I0516 02:02:53.800] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:02:54.185] <no value>Successful
I0516 02:02:54.185] message:valid-pod:
I0516 02:02:54.185] has:valid-pod:
I0516 02:02:54.315] Successful
I0516 02:02:54.315] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0516 02:02:54.315] 	template was:
I0516 02:02:54.315] 		{.missing}
I0516 02:02:54.316] 	object given to jsonpath engine was:
I0516 02:02:54.318] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-16T02:02:53Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-16T02:02:53Z"}}, "name":"valid-pod", "namespace":"namespace-1557972172-29485", "resourceVersion":"730", "selfLink":"/api/v1/namespaces/namespace-1557972172-29485/pods/valid-pod", "uid":"6e42fc98-75c4-45b3-b191-8b3c9f76d753"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0516 02:02:54.318] has:missing is not found
W0516 02:02:54.430] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0516 02:02:54.530] Successful
I0516 02:02:54.531] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0516 02:02:54.531] 	template was:
I0516 02:02:54.531] 		{{.missing}}
I0516 02:02:54.532] 	raw data was:
I0516 02:02:54.533] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-16T02:02:53Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-16T02:02:53Z"}],"name":"valid-pod","namespace":"namespace-1557972172-29485","resourceVersion":"730","selfLink":"/api/v1/namespaces/namespace-1557972172-29485/pods/valid-pod","uid":"6e42fc98-75c4-45b3-b191-8b3c9f76d753"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0516 02:02:54.533] 	object given to template engine was:
I0516 02:02:54.535] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-16T02:02:53Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-16T02:02:53Z]] name:valid-pod namespace:namespace-1557972172-29485 resourceVersion:730 selfLink:/api/v1/namespaces/namespace-1557972172-29485/pods/valid-pod uid:6e42fc98-75c4-45b3-b191-8b3c9f76d753] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0516 02:02:54.535] has:map has no entry for key "missing"
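A minimal sketch (commands assumed, not quoted in the log) of the two output modes exercised above against the same valid-pod: jsonpath reports a missing field as "missing is not found", while the go-template engine reports "map has no entry for key":
  kubectl get pod valid-pod -o jsonpath='{.missing}'
  kubectl get pod valid-pod -o go-template='{{.missing}}'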
I0516 02:02:55.569] Successful
I0516 02:02:55.569] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 02:02:55.569] valid-pod   0/1     Pending   0          1s
I0516 02:02:55.570] STATUS      REASON          MESSAGE
I0516 02:02:55.570] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 02:02:55.570] has:STATUS
I0516 02:02:55.573] Successful
I0516 02:02:55.574] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 02:02:55.574] valid-pod   0/1     Pending   0          1s
I0516 02:02:55.574] STATUS      REASON          MESSAGE
I0516 02:02:55.574] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 02:02:55.575] has:valid-pod
I0516 02:02:56.726] Successful
I0516 02:02:56.726] message:pod/valid-pod
I0516 02:02:56.726] has not:STATUS
I0516 02:02:56.728] Successful
I0516 02:02:56.729] message:pod/valid-pod
... skipping 142 lines ...
I0516 02:02:57.882]   terminationGracePeriodSeconds: 30
I0516 02:02:57.882] status:
I0516 02:02:57.883]   phase: Pending
I0516 02:02:57.883]   qosClass: Guaranteed
I0516 02:02:57.883] has:name: valid-pod
I0516 02:02:58.037] Successful
I0516 02:02:58.037] message:Error from server (NotFound): pods "invalid-pod" not found
I0516 02:02:58.038] has:"invalid-pod" not found
I0516 02:02:58.163] pod "valid-pod" deleted
I0516 02:02:58.331] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:02:58.607] pod/redis-master created
I0516 02:02:58.614] pod/valid-pod created
I0516 02:02:58.761] Successful
... skipping 283 lines ...
I0516 02:03:06.343] +++ command: run_kubectl_exec_pod_tests
I0516 02:03:06.357] +++ [0516 02:03:06] Creating namespace namespace-1557972186-5195
I0516 02:03:06.451] namespace/namespace-1557972186-5195 created
I0516 02:03:06.547] Context "test" modified.
I0516 02:03:06.555] +++ [0516 02:03:06] Testing kubectl exec POD COMMAND
I0516 02:03:06.647] Successful
I0516 02:03:06.648] message:Error from server (NotFound): pods "abc" not found
I0516 02:03:06.648] has:pods "abc" not found
I0516 02:03:06.866] pod/test-pod created
I0516 02:03:07.001] Successful
I0516 02:03:07.001] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 02:03:07.001] has not:pods "test-pod" not found
I0516 02:03:07.003] Successful
I0516 02:03:07.004] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 02:03:07.004] has not:pod or type/name must be specified
I0516 02:03:07.101] pod "test-pod" deleted
I0516 02:03:07.129] +++ exit code: 0
I0516 02:03:07.544] Recording: run_kubectl_exec_resource_name_tests
I0516 02:03:07.544] Running command: run_kubectl_exec_resource_name_tests
I0516 02:03:07.570] 
... skipping 2 lines ...
I0516 02:03:07.579] +++ command: run_kubectl_exec_resource_name_tests
I0516 02:03:07.592] +++ [0516 02:03:07] Creating namespace namespace-1557972187-24364
I0516 02:03:07.676] namespace/namespace-1557972187-24364 created
I0516 02:03:07.780] Context "test" modified.
I0516 02:03:07.793] +++ [0516 02:03:07] Testing kubectl exec TYPE/NAME COMMAND
I0516 02:03:07.928] Successful
I0516 02:03:07.928] message:error: the server doesn't have a resource type "foo"
I0516 02:03:07.928] has:error:
I0516 02:03:08.049] Successful
I0516 02:03:08.049] message:Error from server (NotFound): deployments.extensions "bar" not found
I0516 02:03:08.050] has:"bar" not found
I0516 02:03:08.259] pod/test-pod created
I0516 02:03:08.491] replicaset.apps/frontend created
W0516 02:03:08.591] I0516 02:03:08.497139   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972187-24364", Name:"frontend", UID:"48a01734-d76b-4feb-9e79-f4dc4bf4c6ae", APIVersion:"apps/v1", ResourceVersion:"848", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mpv6s
W0516 02:03:08.592] I0516 02:03:08.501984   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972187-24364", Name:"frontend", UID:"48a01734-d76b-4feb-9e79-f4dc4bf4c6ae", APIVersion:"apps/v1", ResourceVersion:"848", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ksfmf
W0516 02:03:08.593] I0516 02:03:08.506238   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972187-24364", Name:"frontend", UID:"48a01734-d76b-4feb-9e79-f4dc4bf4c6ae", APIVersion:"apps/v1", ResourceVersion:"848", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2d7rg
I0516 02:03:08.711] configmap/test-set-env-config created
I0516 02:03:08.819] Successful
I0516 02:03:08.820] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0516 02:03:08.820] has:not implemented
I0516 02:03:08.917] Successful
I0516 02:03:08.918] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 02:03:08.918] has not:not found
I0516 02:03:08.919] Successful
I0516 02:03:08.920] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 02:03:08.920] has not:pod or type/name must be specified
I0516 02:03:09.035] Successful
I0516 02:03:09.035] message:Error from server (BadRequest): pod frontend-2d7rg does not have a host assigned
I0516 02:03:09.035] has not:not found
I0516 02:03:09.038] Successful
I0516 02:03:09.038] message:Error from server (BadRequest): pod frontend-2d7rg does not have a host assigned
I0516 02:03:09.038] has not:pod or type/name must be specified
I0516 02:03:09.131] pod "test-pod" deleted
I0516 02:03:09.229] replicaset.extensions "frontend" deleted
I0516 02:03:09.315] configmap "test-set-env-config" deleted
I0516 02:03:09.336] +++ exit code: 0
I0516 02:03:09.370] Recording: run_create_secret_tests
I0516 02:03:09.371] Running command: run_create_secret_tests
I0516 02:03:09.393] 
I0516 02:03:09.395] +++ Running case: test-cmd.run_create_secret_tests 
I0516 02:03:09.398] +++ working dir: /go/src/k8s.io/kubernetes
I0516 02:03:09.401] +++ command: run_create_secret_tests
I0516 02:03:09.522] Successful
I0516 02:03:09.522] message:Error from server (NotFound): secrets "mysecret" not found
I0516 02:03:09.522] has:secrets "mysecret" not found
I0516 02:03:09.703] Successful
I0516 02:03:09.703] message:Error from server (NotFound): secrets "mysecret" not found
I0516 02:03:09.703] has:secrets "mysecret" not found
I0516 02:03:09.705] Successful
I0516 02:03:09.705] message:user-specified
I0516 02:03:09.706] has:user-specified
I0516 02:03:09.799] Successful
I0516 02:03:09.895] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"b83c2c3e-2732-4f18-bbc2-b65436dcb8ca","resourceVersion":"869","creationTimestamp":"2019-05-16T02:03:09Z"}}
... skipping 164 lines ...
I0516 02:03:13.169] valid-pod   0/1     Pending   0          1s
I0516 02:03:13.169] has:valid-pod
I0516 02:03:14.270] Successful
I0516 02:03:14.270] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 02:03:14.270] valid-pod   0/1     Pending   0          1s
I0516 02:03:14.271] STATUS      REASON          MESSAGE
I0516 02:03:14.271] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 02:03:14.271] has:Timeout exceeded while reading body
I0516 02:03:14.365] Successful
I0516 02:03:14.365] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 02:03:14.365] valid-pod   0/1     Pending   0          2s
I0516 02:03:14.365] has:valid-pod
I0516 02:03:14.437] Successful
I0516 02:03:14.438] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0516 02:03:14.438] has:Invalid timeout value
I0516 02:03:14.519] pod "valid-pod" deleted
I0516 02:03:14.540] +++ exit code: 0
I0516 02:03:14.654] Recording: run_crd_tests
I0516 02:03:14.654] Running command: run_crd_tests
I0516 02:03:14.679] 
... skipping 248 lines ...
W0516 02:03:22.025] I0516 02:03:21.947761   47831 client.go:354] scheme "" not registered, fallback to default scheme
W0516 02:03:22.025] I0516 02:03:21.948323   47831 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 02:03:22.025] I0516 02:03:21.948780   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:03:22.025] I0516 02:03:21.950591   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:03:22.047] I0516 02:03:22.046893   51197 controller_utils.go:1036] Caches are synced for resource quota controller
I0516 02:03:22.148] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0516 02:03:22.294] +++ [0516 02:03:22] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0516 02:03:22.393] {
I0516 02:03:22.393]     "apiVersion": "company.com/v1",
I0516 02:03:22.394]     "kind": "Foo",
I0516 02:03:22.394]     "metadata": {
I0516 02:03:22.394]         "annotations": {
I0516 02:03:22.394]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 321 lines ...
I0516 02:03:34.688] namespace/non-native-resources created
I0516 02:03:34.965] bar.company.com/test created
I0516 02:03:35.125] crd.sh:456: Successful get bars {{len .items}}: 1
I0516 02:03:35.240] namespace "non-native-resources" deleted
I0516 02:03:40.578] crd.sh:459: Successful get bars {{len .items}}: 0
I0516 02:03:40.835] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0516 02:03:40.935] Error from server (NotFound): namespaces "non-native-resources" not found
I0516 02:03:41.050] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0516 02:03:41.141] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0516 02:03:41.329] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0516 02:03:41.366] +++ exit code: 0
I0516 02:03:41.426] Recording: run_cmd_with_img_tests
I0516 02:03:41.427] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0516 02:03:41.875] I0516 02:03:41.875086   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972221-2545", Name:"test1-7b9c75bcb9", UID:"0df9875d-dcc9-4090-b2ee-51cfdff08975", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-wqqbb
I0516 02:03:41.992] Successful
I0516 02:03:41.992] message:deployment.apps/test1 created
I0516 02:03:41.992] has:deployment.apps/test1 created
I0516 02:03:42.037] deployment.extensions "test1" deleted
I0516 02:03:42.160] Successful
I0516 02:03:42.161] message:error: Invalid image name "InvalidImageName": invalid reference format
I0516 02:03:42.161] has:error: Invalid image name "InvalidImageName": invalid reference format
I0516 02:03:42.176] +++ exit code: 0
I0516 02:03:42.257] +++ [0516 02:03:42] Testing recursive resources
I0516 02:03:42.263] +++ [0516 02:03:42] Creating namespace namespace-1557972222-12422
I0516 02:03:42.382] namespace/namespace-1557972222-12422 created
I0516 02:03:42.494] Context "test" modified.
I0516 02:03:42.641] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:43.053] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:43.056] Successful
I0516 02:03:43.057] message:pod/busybox0 created
I0516 02:03:43.057] pod/busybox1 created
I0516 02:03:43.058] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 02:03:43.058] has:error validating data: kind not set
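A minimal sketch (flags assumed from the "Testing recursive resources" header, not quoted in the log) of the kind of invocation behind this block: the directory is processed recursively, busybox0 and busybox1 are created, and the busybox-broken.yaml fixture fails validation because it carries no valid "kind" field (its JSON spells the key "ind", as visible in the decode errors below):
  kubectl create -f hack/testdata/recursive/pod --recursive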
I0516 02:03:43.200] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:43.503] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0516 02:03:43.507] Successful
I0516 02:03:43.508] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:43.509] has:Object 'Kind' is missing
I0516 02:03:43.696] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:44.122] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0516 02:03:44.126] Successful
I0516 02:03:44.126] message:pod/busybox0 replaced
I0516 02:03:44.126] pod/busybox1 replaced
I0516 02:03:44.127] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 02:03:44.127] has:error validating data: kind not set
I0516 02:03:44.261] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:44.404] (BSuccessful
I0516 02:03:44.405] message:Name:         busybox0
I0516 02:03:44.406] Namespace:    namespace-1557972222-12422
I0516 02:03:44.406] Priority:     0
I0516 02:03:44.406] Node:         <none>
... skipping 153 lines ...
I0516 02:03:44.442] has:Object 'Kind' is missing
I0516 02:03:44.585] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:44.891] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0516 02:03:44.895] Successful
I0516 02:03:44.895] message:pod/busybox0 annotated
I0516 02:03:44.896] pod/busybox1 annotated
I0516 02:03:44.897] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:44.897] has:Object 'Kind' is missing
I0516 02:03:45.030] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:45.485] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0516 02:03:45.490] Successful
I0516 02:03:45.491] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0516 02:03:45.491] pod/busybox0 configured
I0516 02:03:45.491] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0516 02:03:45.491] pod/busybox1 configured
I0516 02:03:45.492] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 02:03:45.493] has:error validating data: kind not set
W0516 02:03:45.593] I0516 02:03:45.407330   51197 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0516 02:03:45.694] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:45.901] deployment.apps/nginx created
W0516 02:03:46.019] I0516 02:03:45.909629   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972222-12422", Name:"nginx", UID:"948d299d-febd-4af5-a69c-bdf8f2d0e74d", APIVersion:"apps/v1", ResourceVersion:"1048", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0516 02:03:46.020] I0516 02:03:45.926705   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972222-12422", Name:"nginx-958dc566b", UID:"e870ec84-afd9-4d92-bca1-8a4c85bfde10", APIVersion:"apps/v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-dfb6v
W0516 02:03:46.020] I0516 02:03:45.934001   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972222-12422", Name:"nginx-958dc566b", UID:"e870ec84-afd9-4d92-bca1-8a4c85bfde10", APIVersion:"apps/v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-khf9c
... skipping 49 lines ...
I0516 02:03:46.763] deployment.extensions "nginx" deleted
I0516 02:03:46.838] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:47.126] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:47.130] Successful
I0516 02:03:47.131] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0516 02:03:47.131] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 02:03:47.132] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:47.132] has:Object 'Kind' is missing
I0516 02:03:47.284] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:47.419] Successful
I0516 02:03:47.420] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:47.420] has:busybox0:busybox1:
I0516 02:03:47.423] Successful
I0516 02:03:47.424] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:47.425] has:Object 'Kind' is missing
I0516 02:03:47.567] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:47.705] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:47.859] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0516 02:03:47.861] Successful
I0516 02:03:47.862] message:pod/busybox0 labeled
I0516 02:03:47.862] pod/busybox1 labeled
I0516 02:03:47.863] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:47.863] has:Object 'Kind' is missing
I0516 02:03:48.029] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:48.170] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:48.313] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0516 02:03:48.317] Successful
I0516 02:03:48.317] message:pod/busybox0 patched
I0516 02:03:48.317] pod/busybox1 patched
I0516 02:03:48.318] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:48.318] has:Object 'Kind' is missing
I0516 02:03:48.455] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:48.715] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:48.717] Successful
I0516 02:03:48.718] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 02:03:48.718] pod "busybox0" force deleted
I0516 02:03:48.718] pod "busybox1" force deleted
I0516 02:03:48.718] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 02:03:48.719] has:Object 'Kind' is missing
I0516 02:03:48.863] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:49.133] replicationcontroller/busybox0 created
I0516 02:03:49.188] replicationcontroller/busybox1 created
W0516 02:03:49.288] I0516 02:03:49.139988   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972222-12422", Name:"busybox0", UID:"4cbe94c8-728f-4dc4-8b8f-522d91a122f0", APIVersion:"v1", ResourceVersion:"1080", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-tmrqm
W0516 02:03:49.358] I0516 02:03:49.160048   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972222-12422", Name:"busybox1", UID:"d891f3de-a0e4-4332-af5e-58069279518c", APIVersion:"v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-9r67m
W0516 02:03:49.358] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 02:03:49.459] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:49.526] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:49.665] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 02:03:49.797] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 02:03:50.080] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0516 02:03:50.220] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0516 02:03:50.222] Successful
I0516 02:03:50.222] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0516 02:03:50.223] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0516 02:03:50.223] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:50.223] has:Object 'Kind' is missing
I0516 02:03:50.339] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0516 02:03:50.464] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0516 02:03:50.621] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:50.760] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 02:03:50.918] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 02:03:51.218] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0516 02:03:51.350] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0516 02:03:51.354] Successful
I0516 02:03:51.355] message:service/busybox0 exposed
I0516 02:03:51.356] service/busybox1 exposed
I0516 02:03:51.357] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:51.357] has:Object 'Kind' is missing
I0516 02:03:51.500] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:51.634] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 02:03:51.776] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 02:03:52.129] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0516 02:03:52.269] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0516 02:03:52.272] Successful
I0516 02:03:52.272] message:replicationcontroller/busybox0 scaled
I0516 02:03:52.272] replicationcontroller/busybox1 scaled
I0516 02:03:52.273] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:52.274] has:Object 'Kind' is missing
W0516 02:03:52.375] I0516 02:03:51.929940   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972222-12422", Name:"busybox0", UID:"4cbe94c8-728f-4dc4-8b8f-522d91a122f0", APIVersion:"v1", ResourceVersion:"1102", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-dt689
W0516 02:03:52.375] I0516 02:03:51.981996   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972222-12422", Name:"busybox1", UID:"d891f3de-a0e4-4332-af5e-58069279518c", APIVersion:"v1", ResourceVersion:"1107", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-cc8f8
W0516 02:03:52.376] I0516 02:03:52.248853   51197 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0516 02:03:52.376] I0516 02:03:52.349640   51197 controller_utils.go:1036] Caches are synced for resource quota controller
I0516 02:03:52.477] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:52.681] generic-resources.sh:381: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:52.685] Successful
I0516 02:03:52.686] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 02:03:52.686] replicationcontroller "busybox0" force deleted
I0516 02:03:52.686] replicationcontroller "busybox1" force deleted
I0516 02:03:52.687] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:52.688] has:Object 'Kind' is missing
W0516 02:03:52.788] I0516 02:03:52.784833   51197 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0516 02:03:52.885] I0516 02:03:52.885139   51197 controller_utils.go:1036] Caches are synced for garbage collector controller
I0516 02:03:52.986] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:53.088] deployment.apps/nginx1-deployment created
I0516 02:03:53.096] deployment.apps/nginx0-deployment created
W0516 02:03:53.196] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 02:03:53.197] I0516 02:03:53.102600   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972222-12422", Name:"nginx1-deployment", UID:"d8ee8f71-e8ba-4a42-86f6-c0a5eed1d08c", APIVersion:"apps/v1", ResourceVersion:"1123", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-67c99bcc6b to 2
W0516 02:03:53.198] I0516 02:03:53.115215   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972222-12422", Name:"nginx1-deployment-67c99bcc6b", UID:"8a8a56a2-7b4e-47c8-9a97-6f61922cdfc8", APIVersion:"apps/v1", ResourceVersion:"1125", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-jjgmz
W0516 02:03:53.198] I0516 02:03:53.120777   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972222-12422", Name:"nginx0-deployment", UID:"5b20e3bd-becf-4a9b-a94c-dbde9ab735c6", APIVersion:"apps/v1", ResourceVersion:"1124", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-5886cf98fc to 2
W0516 02:03:53.198] I0516 02:03:53.123002   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972222-12422", Name:"nginx1-deployment-67c99bcc6b", UID:"8a8a56a2-7b4e-47c8-9a97-6f61922cdfc8", APIVersion:"apps/v1", ResourceVersion:"1125", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-c74j5
W0516 02:03:53.199] I0516 02:03:53.136085   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972222-12422", Name:"nginx0-deployment-5886cf98fc", UID:"38a7b89c-4fdd-4c7f-a46e-af422d422125", APIVersion:"apps/v1", ResourceVersion:"1127", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-4llsq
W0516 02:03:53.200] I0516 02:03:53.145195   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972222-12422", Name:"nginx0-deployment-5886cf98fc", UID:"38a7b89c-4fdd-4c7f-a46e-af422d422125", APIVersion:"apps/v1", ResourceVersion:"1127", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-k2m6t
I0516 02:03:53.326] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0516 02:03:53.470] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0516 02:03:53.796] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0516 02:03:53.799] Successful
I0516 02:03:53.800] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0516 02:03:53.800] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0516 02:03:53.801] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 02:03:53.802] has:Object 'Kind' is missing
I0516 02:03:53.954] deployment.apps/nginx1-deployment paused
I0516 02:03:53.966] deployment.apps/nginx0-deployment paused
I0516 02:03:54.157] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0516 02:03:54.162] Successful
I0516 02:03:54.163] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0516 02:03:54.668] 1         <none>
I0516 02:03:54.668] 
I0516 02:03:54.668] deployment.apps/nginx0-deployment 
I0516 02:03:54.668] REVISION  CHANGE-CAUSE
I0516 02:03:54.668] 1         <none>
I0516 02:03:54.668] 
I0516 02:03:54.669] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 02:03:54.670] has:nginx0-deployment
I0516 02:03:54.670] Successful
I0516 02:03:54.670] message:deployment.apps/nginx1-deployment 
I0516 02:03:54.670] REVISION  CHANGE-CAUSE
I0516 02:03:54.671] 1         <none>
I0516 02:03:54.671] 
I0516 02:03:54.671] deployment.apps/nginx0-deployment 
I0516 02:03:54.671] REVISION  CHANGE-CAUSE
I0516 02:03:54.671] 1         <none>
I0516 02:03:54.671] 
I0516 02:03:54.672] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 02:03:54.672] has:nginx1-deployment
I0516 02:03:54.673] Successful
I0516 02:03:54.673] message:deployment.apps/nginx1-deployment 
I0516 02:03:54.674] REVISION  CHANGE-CAUSE
I0516 02:03:54.674] 1         <none>
I0516 02:03:54.674] 
I0516 02:03:54.674] deployment.apps/nginx0-deployment 
I0516 02:03:54.674] REVISION  CHANGE-CAUSE
I0516 02:03:54.675] 1         <none>
I0516 02:03:54.675] 
I0516 02:03:54.675] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 02:03:54.676] has:Object 'Kind' is missing
W0516 02:03:54.778] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 02:03:54.816] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 02:03:54.917] deployment.apps "nginx1-deployment" force deleted
I0516 02:03:54.917] deployment.apps "nginx0-deployment" force deleted
I0516 02:03:56.050] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:03:56.324] replicationcontroller/busybox0 created
I0516 02:03:56.336] replicationcontroller/busybox1 created
W0516 02:03:56.437] I0516 02:03:56.333611   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972222-12422", Name:"busybox0", UID:"9b9d6a8a-544e-40aa-80d6-19fff03ca141", APIVersion:"v1", ResourceVersion:"1172", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-278wv
W0516 02:03:56.437] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 02:03:56.438] I0516 02:03:56.343585   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972222-12422", Name:"busybox1", UID:"50710377-578f-4ce9-ac16-828313c3bd67", APIVersion:"v1", ResourceVersion:"1173", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xwgl2
I0516 02:03:56.539] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 02:03:56.645] Successful
I0516 02:03:56.646] message:no rollbacker has been implemented for "ReplicationController"
I0516 02:03:56.646] no rollbacker has been implemented for "ReplicationController"
I0516 02:03:56.647] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0516 02:03:56.648] message:no rollbacker has been implemented for "ReplicationController"
I0516 02:03:56.649] no rollbacker has been implemented for "ReplicationController"
I0516 02:03:56.650] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:56.650] has:Object 'Kind' is missing
I0516 02:03:56.844] Successful
I0516 02:03:56.845] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:56.845] error: replicationcontrollers "busybox0" pausing is not supported
I0516 02:03:56.845] error: replicationcontrollers "busybox1" pausing is not supported
I0516 02:03:56.845] has:Object 'Kind' is missing
I0516 02:03:56.849] Successful
I0516 02:03:56.850] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:56.850] error: replicationcontrollers "busybox0" pausing is not supported
I0516 02:03:56.851] error: replicationcontrollers "busybox1" pausing is not supported
I0516 02:03:56.852] has:replicationcontrollers "busybox0" pausing is not supported
I0516 02:03:56.855] Successful
I0516 02:03:56.856] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:56.857] error: replicationcontrollers "busybox0" pausing is not supported
I0516 02:03:56.857] error: replicationcontrollers "busybox1" pausing is not supported
I0516 02:03:56.858] has:replicationcontrollers "busybox1" pausing is not supported
I0516 02:03:57.023] Successful
I0516 02:03:57.024] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:57.025] error: replicationcontrollers "busybox0" resuming is not supported
I0516 02:03:57.025] error: replicationcontrollers "busybox1" resuming is not supported
I0516 02:03:57.026] has:Object 'Kind' is missing
I0516 02:03:57.029] Successful
I0516 02:03:57.030] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:57.030] error: replicationcontrollers "busybox0" resuming is not supported
I0516 02:03:57.031] error: replicationcontrollers "busybox1" resuming is not supported
I0516 02:03:57.032] has:replicationcontrollers "busybox0" resuming is not supported
I0516 02:03:57.035] Successful
I0516 02:03:57.036] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:57.036] error: replicationcontrollers "busybox0" resuming is not supported
I0516 02:03:57.037] error: replicationcontrollers "busybox1" resuming is not supported
I0516 02:03:57.037] has:replicationcontrollers "busybox0" resuming is not supported
W0516 02:03:57.142] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 02:03:57.175] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 02:03:57.276] replicationcontroller "busybox0" force deleted
I0516 02:03:57.277] replicationcontroller "busybox1" force deleted
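The decode and validation failures above are expected: the recursive fixtures under hack/testdata/recursive/ deliberately misspell the Kind key ("ind" instead of "kind") so kubectl's per-file error handling is exercised while the valid manifests (busybox0, busybox1) are still created. A rough sketch of the invocation pattern (the exact subcommand used by the test script is an assumption; the fixture path is taken from the messages above):

    # Walk the fixture directory recursively; the broken manifest is reported, the valid ones are created.
    kubectl create -f hack/testdata/recursive/rc/rc --recursive
    # Client-side schema validation can be skipped, as the validation error itself suggests:
    kubectl create -f hack/testdata/recursive/rc/rc --recursive --validate=false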
I0516 02:03:58.194] Recording: run_namespace_tests
I0516 02:03:58.194] Running command: run_namespace_tests
I0516 02:03:58.220] 
I0516 02:03:58.223] +++ Running case: test-cmd.run_namespace_tests 
... skipping 2 lines ...
I0516 02:03:58.244] +++ [0516 02:03:58] Testing kubectl(v1:namespaces)
I0516 02:03:58.355] namespace/my-namespace created
I0516 02:03:58.552] core.sh:1321: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0516 02:03:58.681] namespace "my-namespace" deleted
I0516 02:04:03.801] namespace/my-namespace condition met
I0516 02:04:03.938] Successful
I0516 02:04:03.939] message:Error from server (NotFound): namespaces "my-namespace" not found
I0516 02:04:03.939] has: not found
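"namespace/my-namespace condition met" above is the signature of kubectl wait; the sequence being verified is delete, wait for the object to disappear, then confirm the NotFound error. A minimal sketch (the --for=delete form and the timeout value are assumptions, not taken from the log):

    kubectl delete namespace my-namespace
    # Block until the namespace object is actually gone:
    kubectl wait --for=delete namespace/my-namespace --timeout=60s
    kubectl get namespace my-namespace    # now returns: Error from server (NotFound)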
I0516 02:04:04.062] namespace/my-namespace created
I0516 02:04:04.203] core.sh:1330: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0516 02:04:04.510] Successful
I0516 02:04:04.511] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 02:04:04.511] namespace "kube-node-lease" deleted
... skipping 30 lines ...
I0516 02:04:04.517] namespace "namespace-1557972191-29405" deleted
I0516 02:04:04.518] namespace "namespace-1557972192-18829" deleted
I0516 02:04:04.518] namespace "namespace-1557972194-14135" deleted
I0516 02:04:04.518] namespace "namespace-1557972196-487" deleted
I0516 02:04:04.518] namespace "namespace-1557972221-2545" deleted
I0516 02:04:04.520] namespace "namespace-1557972222-12422" deleted
I0516 02:04:04.520] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0516 02:04:04.520] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0516 02:04:04.520] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0516 02:04:04.521] has:warning: deleting cluster-scoped resources
I0516 02:04:04.521] Successful
I0516 02:04:04.521] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 02:04:04.521] namespace "kube-node-lease" deleted
I0516 02:04:04.521] namespace "my-namespace" deleted
I0516 02:04:04.522] namespace "namespace-1557972051-31466" deleted
... skipping 28 lines ...
I0516 02:04:04.528] namespace "namespace-1557972191-29405" deleted
I0516 02:04:04.528] namespace "namespace-1557972192-18829" deleted
I0516 02:04:04.529] namespace "namespace-1557972194-14135" deleted
I0516 02:04:04.529] namespace "namespace-1557972196-487" deleted
I0516 02:04:04.529] namespace "namespace-1557972221-2545" deleted
I0516 02:04:04.529] namespace "namespace-1557972222-12422" deleted
I0516 02:04:04.529] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0516 02:04:04.530] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0516 02:04:04.530] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0516 02:04:04.530] has:namespace "my-namespace" deleted
I0516 02:04:04.691] core.sh:1342: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0516 02:04:04.811] namespace/other created
W0516 02:04:04.916] I0516 02:04:04.916170   51197 horizontal.go:320] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1557972222-12422
W0516 02:04:04.922] I0516 02:04:04.921592   51197 horizontal.go:320] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1557972222-12422
I0516 02:04:05.022] core.sh:1346: Successful get namespaces/other {{.metadata.name}}: other
I0516 02:04:05.102] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:04:05.374] pod/valid-pod created
I0516 02:04:05.533] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:04:05.673] core.sh:1356: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:04:05.809] Successful
I0516 02:04:05.810] message:error: a resource cannot be retrieved by name across all namespaces
I0516 02:04:05.810] has:a resource cannot be retrieved by name across all namespaces
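The error above is the guard against combining a resource name with --all-namespaces: listing across namespaces is fine, fetching a single named object is not. Sketch:

    kubectl get pods --namespace=other valid-pod    # works: the name is resolved within one namespace
    kubectl get pod valid-pod --all-namespaces      # error: a resource cannot be retrieved by name across all namespaces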
I0516 02:04:05.965] core.sh:1363: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 02:04:06.086] pod "valid-pod" force deleted
W0516 02:04:06.187] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 02:04:06.288] core.sh:1367: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:04:06.353] namespace "other" deleted
... skipping 149 lines ...
I0516 02:04:30.301] +++ command: run_client_config_tests
I0516 02:04:30.314] +++ [0516 02:04:30] Creating namespace namespace-1557972270-13477
I0516 02:04:30.425] namespace/namespace-1557972270-13477 created
I0516 02:04:30.524] Context "test" modified.
I0516 02:04:30.534] +++ [0516 02:04:30] Testing client config
I0516 02:04:30.624] Successful
I0516 02:04:30.625] message:error: stat missing: no such file or directory
I0516 02:04:30.625] has:missing: no such file or directory
I0516 02:04:30.704] Successful
I0516 02:04:30.704] message:error: stat missing: no such file or directory
I0516 02:04:30.704] has:missing: no such file or directory
I0516 02:04:30.788] Successful
I0516 02:04:30.789] message:error: stat missing: no such file or directory
I0516 02:04:30.789] has:missing: no such file or directory
I0516 02:04:30.874] Successful
I0516 02:04:30.875] message:Error in configuration: context was not found for specified context: missing-context
I0516 02:04:30.875] has:context was not found for specified context: missing-context
I0516 02:04:30.960] Successful
I0516 02:04:30.961] message:error: no server found for cluster "missing-cluster"
I0516 02:04:30.961] has:no server found for cluster "missing-cluster"
I0516 02:04:31.047] Successful
I0516 02:04:31.047] message:error: auth info "missing-user" does not exist
I0516 02:04:31.047] has:auth info "missing-user" does not exist
I0516 02:04:31.207] Successful
I0516 02:04:31.208] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0516 02:04:31.208] has:Error loading config file
I0516 02:04:31.318] Successful
I0516 02:04:31.318] message:error: stat missing-config: no such file or directory
I0516 02:04:31.318] has:no such file or directory
I0516 02:04:31.336] +++ exit code: 0
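Each error string in the client-config case above maps to one misconfigured global kubectl flag. Roughly (the flag values mirror the log; the chosen subcommand is arbitrary):

    kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
    kubectl get pods --context=missing-context     # context was not found for specified context
    kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user           # auth info "missing-user" does not exist
    kubectl get pods --kubeconfig=missing-config   # stat missing-config: no such file or directory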
I0516 02:04:31.626] Recording: run_service_accounts_tests
I0516 02:04:31.626] Running command: run_service_accounts_tests
I0516 02:04:31.649] 
I0516 02:04:31.651] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 37 lines ...
I0516 02:04:39.625] Labels:                        run=pi
I0516 02:04:39.626] Annotations:                   <none>
I0516 02:04:39.626] Schedule:                      59 23 31 2 *
I0516 02:04:39.626] Concurrency Policy:            Allow
I0516 02:04:39.626] Suspend:                       False
I0516 02:04:39.626] Successful Job History Limit:  3
I0516 02:04:39.627] Failed Job History Limit:      1
I0516 02:04:39.627] Starting Deadline Seconds:     <unset>
I0516 02:04:39.627] Selector:                      <unset>
I0516 02:04:39.627] Parallelism:                   <unset>
I0516 02:04:39.627] Completions:                   <unset>
I0516 02:04:39.627] Pod Template:
I0516 02:04:39.628]   Labels:  run=pi
... skipping 36 lines ...
I0516 02:04:40.379]                 run=pi
I0516 02:04:40.379] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0516 02:04:40.380] Controlled By:  CronJob/pi
I0516 02:04:40.380] Parallelism:    1
I0516 02:04:40.380] Completions:    1
I0516 02:04:40.380] Start Time:     Thu, 16 May 2019 02:04:39 +0000
I0516 02:04:40.380] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0516 02:04:40.380] Pod Template:
I0516 02:04:40.381]   Labels:  controller-uid=449b3e70-9c4a-47b6-b5ea-a3e0ef8dd18f
I0516 02:04:40.381]            job-name=test-job
I0516 02:04:40.381]            run=pi
I0516 02:04:40.381]   Containers:
I0516 02:04:40.381]    pi:
... skipping 388 lines ...
I0516 02:04:54.108]   selector:
I0516 02:04:54.108]     role: padawan
I0516 02:04:54.108]   sessionAffinity: None
I0516 02:04:54.108]   type: ClusterIP
I0516 02:04:54.108] status:
I0516 02:04:54.109]   loadBalancer: {}
W0516 02:04:54.209] error: you must specify resources by --filename when --local is set.
W0516 02:04:54.209] Example resource specifications include:
W0516 02:04:54.210]    '-f rsrc.yaml'
W0516 02:04:54.210]    '--filename=rsrc.json'
I0516 02:04:54.351] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0516 02:04:54.617] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0516 02:04:54.744] (Bservice "redis-master" deleted
... skipping 107 lines ...
I0516 02:05:06.607] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 02:05:06.767] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 02:05:06.955] daemonset.extensions/bind rolled back
I0516 02:05:07.153] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 02:05:07.341] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 02:05:07.517] Successful
I0516 02:05:07.517] message:error: unable to find specified revision 1000000 in history
I0516 02:05:07.517] has:unable to find specified revision
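The revision-1000000 failure comes from asking rollout undo for a history entry that the "bind" daemonset never had; --to-revision is the relevant flag. Sketch:

    kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision 1000000 in history
    kubectl rollout undo daemonset/bind                         # no revision given: roll back to the previous one, as in the "rolled back" lines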
I0516 02:05:07.686] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 02:05:07.893] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 02:05:08.103] daemonset.extensions/bind rolled back
I0516 02:05:08.267] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0516 02:05:08.414] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0516 02:05:10.718] Namespace:    namespace-1557972308-15587
I0516 02:05:10.718] Selector:     app=guestbook,tier=frontend
I0516 02:05:10.718] Labels:       app=guestbook
I0516 02:05:10.719]               tier=frontend
I0516 02:05:10.719] Annotations:  <none>
I0516 02:05:10.719] Replicas:     3 current / 3 desired
I0516 02:05:10.720] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:10.720] Pod Template:
I0516 02:05:10.720]   Labels:  app=guestbook
I0516 02:05:10.720]            tier=frontend
I0516 02:05:10.720]   Containers:
I0516 02:05:10.720]    php-redis:
I0516 02:05:10.720]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 02:05:10.904] Namespace:    namespace-1557972308-15587
I0516 02:05:10.905] Selector:     app=guestbook,tier=frontend
I0516 02:05:10.905] Labels:       app=guestbook
I0516 02:05:10.905]               tier=frontend
I0516 02:05:10.905] Annotations:  <none>
I0516 02:05:10.905] Replicas:     3 current / 3 desired
I0516 02:05:10.906] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:10.906] Pod Template:
I0516 02:05:10.906]   Labels:  app=guestbook
I0516 02:05:10.906]            tier=frontend
I0516 02:05:10.906]   Containers:
I0516 02:05:10.906]    php-redis:
I0516 02:05:10.907]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0516 02:05:11.090] Namespace:    namespace-1557972308-15587
I0516 02:05:11.090] Selector:     app=guestbook,tier=frontend
I0516 02:05:11.090] Labels:       app=guestbook
I0516 02:05:11.090]               tier=frontend
I0516 02:05:11.090] Annotations:  <none>
I0516 02:05:11.091] Replicas:     3 current / 3 desired
I0516 02:05:11.091] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:11.091] Pod Template:
I0516 02:05:11.091]   Labels:  app=guestbook
I0516 02:05:11.091]            tier=frontend
I0516 02:05:11.092]   Containers:
I0516 02:05:11.092]    php-redis:
I0516 02:05:11.092]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0516 02:05:11.276] Namespace:    namespace-1557972308-15587
I0516 02:05:11.276] Selector:     app=guestbook,tier=frontend
I0516 02:05:11.277] Labels:       app=guestbook
I0516 02:05:11.277]               tier=frontend
I0516 02:05:11.277] Annotations:  <none>
I0516 02:05:11.277] Replicas:     3 current / 3 desired
I0516 02:05:11.277] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:11.278] Pod Template:
I0516 02:05:11.278]   Labels:  app=guestbook
I0516 02:05:11.278]            tier=frontend
I0516 02:05:11.278]   Containers:
I0516 02:05:11.278]    php-redis:
I0516 02:05:11.279]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0516 02:05:11.501] Namespace:    namespace-1557972308-15587
I0516 02:05:11.502] Selector:     app=guestbook,tier=frontend
I0516 02:05:11.502] Labels:       app=guestbook
I0516 02:05:11.502]               tier=frontend
I0516 02:05:11.502] Annotations:  <none>
I0516 02:05:11.503] Replicas:     3 current / 3 desired
I0516 02:05:11.503] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:11.503] Pod Template:
I0516 02:05:11.503]   Labels:  app=guestbook
I0516 02:05:11.503]            tier=frontend
I0516 02:05:11.504]   Containers:
I0516 02:05:11.504]    php-redis:
I0516 02:05:11.504]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 02:05:11.679] Namespace:    namespace-1557972308-15587
I0516 02:05:11.679] Selector:     app=guestbook,tier=frontend
I0516 02:05:11.679] Labels:       app=guestbook
I0516 02:05:11.679]               tier=frontend
I0516 02:05:11.680] Annotations:  <none>
I0516 02:05:11.680] Replicas:     3 current / 3 desired
I0516 02:05:11.680] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:11.680] Pod Template:
I0516 02:05:11.680]   Labels:  app=guestbook
I0516 02:05:11.681]            tier=frontend
I0516 02:05:11.681]   Containers:
I0516 02:05:11.681]    php-redis:
I0516 02:05:11.681]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 02:05:11.868] Namespace:    namespace-1557972308-15587
I0516 02:05:11.868] Selector:     app=guestbook,tier=frontend
I0516 02:05:11.868] Labels:       app=guestbook
I0516 02:05:11.869]               tier=frontend
I0516 02:05:11.869] Annotations:  <none>
I0516 02:05:11.869] Replicas:     3 current / 3 desired
I0516 02:05:11.869] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:11.869] Pod Template:
I0516 02:05:11.870]   Labels:  app=guestbook
I0516 02:05:11.870]            tier=frontend
I0516 02:05:11.870]   Containers:
I0516 02:05:11.870]    php-redis:
I0516 02:05:11.870]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0516 02:05:12.065] Namespace:    namespace-1557972308-15587
I0516 02:05:12.065] Selector:     app=guestbook,tier=frontend
I0516 02:05:12.065] Labels:       app=guestbook
I0516 02:05:12.065]               tier=frontend
I0516 02:05:12.066] Annotations:  <none>
I0516 02:05:12.066] Replicas:     3 current / 3 desired
I0516 02:05:12.066] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:12.066] Pod Template:
I0516 02:05:12.066]   Labels:  app=guestbook
I0516 02:05:12.067]            tier=frontend
I0516 02:05:12.067]   Containers:
I0516 02:05:12.067]    php-redis:
I0516 02:05:12.067]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
W0516 02:05:12.483] I0516 02:05:12.393772   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972308-15587", Name:"frontend", UID:"a305d6a0-cb4d-4add-bb47-c7c112e0a842", APIVersion:"v1", ResourceVersion:"1708", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-jghrp
I0516 02:05:12.583] core.sh:1071: Successful get rc frontend {{.spec.replicas}}: 2
I0516 02:05:12.680] core.sh:1075: Successful get rc frontend {{.spec.replicas}}: 2
I0516 02:05:12.996] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 2
I0516 02:05:13.150] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0516 02:05:13.306] replicationcontroller/frontend scaled
W0516 02:05:13.406] error: Expected replicas to be 3, was 2
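"Expected replicas to be 3, was 2" is the scale precondition check: --current-replicas asserts the size the controller must already have before the change is applied. A sketch against the frontend rc from this block (the requested replica count is illustrative):

    kubectl scale rc frontend --current-replicas=3 --replicas=3   # refused while the rc is still at 2 replicas
    kubectl scale rc frontend --replicas=3                        # no precondition, so it succeeds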
W0516 02:05:13.568] I0516 02:05:13.316243   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972308-15587", Name:"frontend", UID:"a305d6a0-cb4d-4add-bb47-c7c112e0a842", APIVersion:"v1", ResourceVersion:"1715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m4j9l
I0516 02:05:13.669] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0516 02:05:13.752] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0516 02:05:13.907] replicationcontroller/frontend scaled
W0516 02:05:14.008] I0516 02:05:13.911565   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972308-15587", Name:"frontend", UID:"a305d6a0-cb4d-4add-bb47-c7c112e0a842", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-m4j9l
I0516 02:05:14.109] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
... skipping 41 lines ...
I0516 02:05:17.823] service "expose-test-deployment" deleted
I0516 02:05:18.013] Successful
I0516 02:05:18.014] message:service/expose-test-deployment exposed
I0516 02:05:18.014] has:service/expose-test-deployment exposed
I0516 02:05:18.095] service "expose-test-deployment" deleted
I0516 02:05:18.229] Successful
I0516 02:05:18.229] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0516 02:05:18.229] See 'kubectl expose -h' for help and examples
I0516 02:05:18.229] has:invalid deployment: no selectors
I0516 02:05:18.397] Successful
I0516 02:05:18.397] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0516 02:05:18.397] See 'kubectl expose -h' for help and examples
I0516 02:05:18.397] has:invalid deployment: no selectors
I0516 02:05:18.775] deployment.apps/nginx-deployment created
W0516 02:05:18.918] I0516 02:05:18.775754   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-5cb597d4f", UID:"bf4a393e-30fb-48b2-ae2a-6764b478e49a", APIVersion:"apps/v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-rrtdl
W0516 02:05:18.919] I0516 02:05:18.776873   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment", UID:"ab0de3f3-e2f6-454a-9067-84b780d51817", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5cb597d4f to 3
W0516 02:05:18.919] I0516 02:05:18.783763   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-5cb597d4f", UID:"bf4a393e-30fb-48b2-ae2a-6764b478e49a", APIVersion:"apps/v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-rp6rs
... skipping 23 lines ...
I0516 02:05:21.453] service "frontend" deleted
I0516 02:05:21.461] service "frontend-2" deleted
I0516 02:05:21.468] service "frontend-3" deleted
I0516 02:05:21.478] service "frontend-4" deleted
I0516 02:05:21.487] service "frontend-5" deleted
I0516 02:05:21.618] Successful
I0516 02:05:21.619] message:error: cannot expose a Node
I0516 02:05:21.619] has:cannot expose
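Both expose failures above are input-validation paths: a deployment whose spec carries no selector cannot be turned into a service, and Node objects are not exposable at all. Sketches (the deployment name is illustrative; the node name matches the test cluster's single node seen later in the log):

    kubectl expose deployment deployment-without-selector --port=80   # error: invalid deployment: no selectors, therefore cannot be exposed
    kubectl expose node 127.0.0.1 --port=80                           # error: cannot expose a Node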
I0516 02:05:21.711] Successful
I0516 02:05:21.711] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0516 02:05:21.711] has:metadata.name: Invalid value
I0516 02:05:21.818] Successful
I0516 02:05:21.819] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0516 02:05:24.515] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 02:05:24.626] core.sh:1259: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0516 02:05:24.734] horizontalpodautoscaler.autoscaling "frontend" deleted
I0516 02:05:24.853] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 02:05:24.953] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0516 02:05:25.121] horizontalpodautoscaler.autoscaling "frontend" deleted
W0516 02:05:25.253] Error: required flag(s) "max" not set
W0516 02:05:25.253] 
W0516 02:05:25.254] 
W0516 02:05:25.254] Examples:
W0516 02:05:25.254]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0516 02:05:25.254]   kubectl autoscale deployment foo --min=2 --max=10
W0516 02:05:25.255]   
... skipping 55 lines ...
I0516 02:05:25.531]           limits:
I0516 02:05:25.532]             cpu: 300m
I0516 02:05:25.532]           requests:
I0516 02:05:25.532]             cpu: 300m
I0516 02:05:25.532]       terminationGracePeriodSeconds: 0
I0516 02:05:25.532] status: {}
W0516 02:05:25.633] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0516 02:05:25.821] deployment.apps/nginx-deployment-resources created
W0516 02:05:25.922] I0516 02:05:25.830559   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources", UID:"0be8b14e-993e-4827-ad4c-c6ba26bfe90d", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-865b6bb7c6 to 3
W0516 02:05:25.923] I0516 02:05:25.837688   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources-865b6bb7c6", UID:"4b498141-d41e-45b3-9b89-1b7a4f82b0b6", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-9qxck
W0516 02:05:25.924] I0516 02:05:25.844696   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources-865b6bb7c6", UID:"4b498141-d41e-45b3-9b89-1b7a4f82b0b6", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-6gvcj
W0516 02:05:25.924] I0516 02:05:25.846530   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources-865b6bb7c6", UID:"4b498141-d41e-45b3-9b89-1b7a4f82b0b6", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-f4xjt
I0516 02:05:26.025] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0516 02:05:26.285] deployment.extensions/nginx-deployment-resources resource requirements updated
W0516 02:05:26.386] I0516 02:05:26.294015   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources", UID:"0be8b14e-993e-4827-ad4c-c6ba26bfe90d", APIVersion:"apps/v1", ResourceVersion:"1992", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69b4c96c9b to 1
W0516 02:05:26.387] I0516 02:05:26.300759   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources-69b4c96c9b", UID:"03ce001a-da63-4216-a170-83ead619446a", APIVersion:"apps/v1", ResourceVersion:"1993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69b4c96c9b-p2nx8
I0516 02:05:26.487] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0516 02:05:26.516] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0516 02:05:26.778] deployment.extensions/nginx-deployment-resources resource requirements updated
W0516 02:05:26.879] error: unable to find container named redis
W0516 02:05:26.879] I0516 02:05:26.811036   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources", UID:"0be8b14e-993e-4827-ad4c-c6ba26bfe90d", APIVersion:"apps/v1", ResourceVersion:"2001", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-865b6bb7c6 to 2
W0516 02:05:26.880] I0516 02:05:26.819814   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources-865b6bb7c6", UID:"4b498141-d41e-45b3-9b89-1b7a4f82b0b6", APIVersion:"apps/v1", ResourceVersion:"2005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-865b6bb7c6-f4xjt
W0516 02:05:26.880] I0516 02:05:26.857172   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources", UID:"0be8b14e-993e-4827-ad4c-c6ba26bfe90d", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-7bb7d84c58 to 1
W0516 02:05:26.881] I0516 02:05:26.861527   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972308-15587", Name:"nginx-deployment-resources-7bb7d84c58", UID:"5d6a6561-3b44-4349-8d12-72f9d66b6606", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-7bb7d84c58-rl7vw
I0516 02:05:26.981] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0516 02:05:27.050] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 211 lines ...
I0516 02:05:27.657]     status: "True"
I0516 02:05:27.657]     type: Progressing
I0516 02:05:27.657]   observedGeneration: 4
I0516 02:05:27.657]   replicas: 4
I0516 02:05:27.657]   unavailableReplicas: 4
I0516 02:05:27.657]   updatedReplicas: 1
W0516 02:05:27.758] error: you must specify resources by --filename when --local is set.
W0516 02:05:27.758] Example resource specifications include:
W0516 02:05:27.758]    '-f rsrc.yaml'
W0516 02:05:27.758]    '--filename=rsrc.json'
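The --local failure above is kubectl set resources refusing to run offline without an input manifest; with -f it rewrites the object locally and prints it instead of patching the server. Sketch (the file name is illustrative; the cpu values mirror the 200m/300m limits asserted below):

    # Offline form: requires -f, prints the modified object without contacting the API server.
    kubectl set resources -f deployment.yaml --local --limits=cpu=300m --requests=cpu=300m -o yaml
    # Server-side form, as in the "resource requirements updated" lines above:
    kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m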
I0516 02:05:27.859] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0516 02:05:27.936] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0516 02:05:28.032] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0516 02:05:29.888]                 pod-template-hash=75c7695cbd
I0516 02:05:29.889] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0516 02:05:29.889]                 deployment.kubernetes.io/max-replicas: 2
I0516 02:05:29.889]                 deployment.kubernetes.io/revision: 1
I0516 02:05:29.889] Controlled By:  Deployment/test-nginx-apps
I0516 02:05:29.890] Replicas:       1 current / 1 desired
I0516 02:05:29.890] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:29.890] Pod Template:
I0516 02:05:29.890]   Labels:  app=test-nginx-apps
I0516 02:05:29.891]            pod-template-hash=75c7695cbd
I0516 02:05:29.891]   Containers:
I0516 02:05:29.891]    nginx:
I0516 02:05:29.891]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 90 lines ...
I0516 02:05:34.705]     Image:	k8s.gcr.io/nginx:test-cmd
I0516 02:05:34.838] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 02:05:34.966] deployment.extensions/nginx rolled back
I0516 02:05:36.079] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 02:05:36.295] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 02:05:36.414] deployment.extensions/nginx rolled back
W0516 02:05:36.514] error: unable to find specified revision 1000000 in history
I0516 02:05:37.545] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 02:05:37.659] deployment.extensions/nginx paused
W0516 02:05:39.037] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0516 02:05:39.244] error: deployments.extensions "nginx" can't restart paused deployment (run rollout resume first)
I0516 02:05:39.416] deployment.extensions/nginx resumed
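The two errors above are the guards around paused deployments: neither rollback nor restart is allowed until the rollout is resumed, which is exactly what the following lines do. The corresponding commands look roughly like:

    kubectl rollout pause deployment/nginx
    kubectl rollout undo deployment/nginx      # refused: resume it first
    kubectl rollout restart deployment/nginx   # refused: can't restart a paused deployment
    kubectl rollout resume deployment/nginx
    kubectl rollout undo deployment/nginx      # now proceeds ("rolled back" below)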
W0516 02:05:39.517] I0516 02:05:39.515912   51197 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557972308-15587
I0516 02:05:39.618] deployment.extensions/nginx rolled back
I0516 02:05:39.882]     deployment.kubernetes.io/revision-history: 1,3
W0516 02:05:40.109] error: desired revision (3) is different from the running revision (5)
I0516 02:05:40.265] deployment.extensions/nginx restarted
W0516 02:05:40.366] I0516 02:05:40.318238   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972328-2190", Name:"nginx", UID:"3c22a3b9-9f90-4ba9-ad2c-7858fe12374e", APIVersion:"apps/v1", ResourceVersion:"2225", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-958dc566b to 2
W0516 02:05:40.366] I0516 02:05:40.323165   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972328-2190", Name:"nginx-958dc566b", UID:"7bdec5d2-60fa-47b9-8831-6382d6768f83", APIVersion:"apps/v1", ResourceVersion:"2229", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-958dc566b-sptv5
W0516 02:05:40.374] I0516 02:05:40.373446   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972328-2190", Name:"nginx", UID:"3c22a3b9-9f90-4ba9-ad2c-7858fe12374e", APIVersion:"apps/v1", ResourceVersion:"2228", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5c59d8c7db to 1
W0516 02:05:40.382] I0516 02:05:40.381824   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972328-2190", Name:"nginx-5c59d8c7db", UID:"70a4a0ee-efb6-439c-a5af-10b2db59d38a", APIVersion:"apps/v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5c59d8c7db-b5d8x
I0516 02:05:41.572] Successful
... skipping 142 lines ...
I0516 02:05:43.174] deployment.extensions/nginx-deployment image updated
W0516 02:05:43.275] I0516 02:05:43.180332   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972328-2190", Name:"nginx-deployment", UID:"2ccfef0a-76b1-4f8e-94c1-2708ea6279d7", APIVersion:"apps/v1", ResourceVersion:"2294", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64f55cb875 to 1
W0516 02:05:43.275] I0516 02:05:43.188490   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972328-2190", Name:"nginx-deployment-64f55cb875", UID:"83876ca6-3108-436c-992d-e0c76840e8b3", APIVersion:"apps/v1", ResourceVersion:"2295", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64f55cb875-mscbp
I0516 02:05:43.376] apps.sh:345: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 02:05:43.437] apps.sh:346: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 02:05:43.738] deployment.extensions/nginx-deployment image updated
W0516 02:05:43.838] error: unable to find container named "redis"
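The "unable to find container named "redis"" error is kubectl set image validating the container name against the pod template, which in this deployment only has nginx and perl containers (see the image assertions that follow). Sketch:

    kubectl set image deployment/nginx-deployment redis=k8s.gcr.io/nginx:1.7.9   # fails: no container named redis
    kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9   # matches the "image updated" lines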
I0516 02:05:43.939] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 02:05:43.957] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 02:05:44.101] deployment.apps/nginx-deployment image updated
I0516 02:05:44.222] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 02:05:44.349] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 02:05:44.576] apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
... skipping 48 lines ...
I0516 02:05:48.150] deployment.extensions/nginx-deployment env updated
W0516 02:05:48.251] I0516 02:05:48.159881   51197 horizontal.go:320] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1557972328-2190
W0516 02:05:48.252] I0516 02:05:48.252096   51197 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557972328-2190", Name:"nginx-deployment", UID:"412a554a-0668-41a2-859a-c8abcb0c7536", APIVersion:"apps/v1", ResourceVersion:"2447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-57b54775 to 0
W0516 02:05:48.350] I0516 02:05:48.349544   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972328-2190", Name:"nginx-deployment-57b54775", UID:"6673e00f-51af-4046-869d-ddd6ced32422", APIVersion:"apps/v1", ResourceVersion:"2453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-57b54775-vjhv4
I0516 02:05:48.451] deployment.extensions/nginx-deployment env updated
I0516 02:05:48.451] deployment.extensions "nginx-deployment" deleted
W0516 02:05:48.552] E0516 02:05:48.542201   51197 replica_set.go:450] Sync "namespace-1557972328-2190/nginx-deployment-57b54775" failed with replicasets.apps "nginx-deployment-57b54775" not found
I0516 02:05:48.652] configmap "test-set-env-config" deleted
I0516 02:05:48.690] secret "test-set-env-secret" deleted
I0516 02:05:48.837] +++ exit code: 0
I0516 02:05:49.405] Recording: run_rs_tests
I0516 02:05:49.405] Running command: run_rs_tests
I0516 02:05:49.511] 
... skipping 17 lines ...
W0516 02:05:50.527] I0516 02:05:50.431342   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972349-17904", Name:"frontend-no-cascade", UID:"66820fc1-a042-41f9-9ed5-fffdcfd2a9b5", APIVersion:"apps/v1", ResourceVersion:"2496", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-zhkwp
W0516 02:05:50.527] I0516 02:05:50.436413   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972349-17904", Name:"frontend-no-cascade", UID:"66820fc1-a042-41f9-9ed5-fffdcfd2a9b5", APIVersion:"apps/v1", ResourceVersion:"2496", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-6rwwk
W0516 02:05:50.528] I0516 02:05:50.436589   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557972349-17904", Name:"frontend-no-cascade", UID:"66820fc1-a042-41f9-9ed5-fffdcfd2a9b5", APIVersion:"apps/v1", ResourceVersion:"2496", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-fh4bp
I0516 02:05:50.628] apps.sh:526: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0516 02:05:50.629] +++ [0516 02:05:50] Deleting rs
I0516 02:05:50.631] replicaset.extensions "frontend-no-cascade" deleted
W0516 02:05:50.731] E0516 02:05:50.659171   51197 replica_set.go:450] Sync "namespace-1557972349-17904/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I0516 02:05:50.832] apps.sh:530: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:05:50.841] apps.sh:532: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0516 02:05:50.922] pod "frontend-no-cascade-6rwwk" deleted
I0516 02:05:50.927] pod "frontend-no-cascade-fh4bp" deleted
I0516 02:05:50.933] pod "frontend-no-cascade-zhkwp" deleted
I0516 02:05:51.036] apps.sh:535: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 8 lines ...
I0516 02:05:51.619] Namespace:    namespace-1557972349-17904
I0516 02:05:51.619] Selector:     app=guestbook,tier=frontend
I0516 02:05:51.619] Labels:       app=guestbook
I0516 02:05:51.619]               tier=frontend
I0516 02:05:51.620] Annotations:  <none>
I0516 02:05:51.620] Replicas:     3 current / 3 desired
I0516 02:05:51.620] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:51.620] Pod Template:
I0516 02:05:51.620]   Labels:  app=guestbook
I0516 02:05:51.621]            tier=frontend
I0516 02:05:51.621]   Containers:
I0516 02:05:51.621]    php-redis:
I0516 02:05:51.621]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 02:05:51.730] Namespace:    namespace-1557972349-17904
I0516 02:05:51.730] Selector:     app=guestbook,tier=frontend
I0516 02:05:51.730] Labels:       app=guestbook
I0516 02:05:51.730]               tier=frontend
I0516 02:05:51.730] Annotations:  <none>
I0516 02:05:51.731] Replicas:     3 current / 3 desired
I0516 02:05:51.731] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:51.731] Pod Template:
I0516 02:05:51.731]   Labels:  app=guestbook
I0516 02:05:51.731]            tier=frontend
I0516 02:05:51.731]   Containers:
I0516 02:05:51.732]    php-redis:
I0516 02:05:51.732]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0516 02:05:51.865] Namespace:    namespace-1557972349-17904
I0516 02:05:51.865] Selector:     app=guestbook,tier=frontend
I0516 02:05:51.865] Labels:       app=guestbook
I0516 02:05:51.865]               tier=frontend
I0516 02:05:51.866] Annotations:  <none>
I0516 02:05:51.866] Replicas:     3 current / 3 desired
I0516 02:05:51.866] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:51.866] Pod Template:
I0516 02:05:51.866]   Labels:  app=guestbook
I0516 02:05:51.866]            tier=frontend
I0516 02:05:51.867]   Containers:
I0516 02:05:51.867]    php-redis:
I0516 02:05:51.867]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0516 02:05:51.990] Namespace:    namespace-1557972349-17904
I0516 02:05:51.990] Selector:     app=guestbook,tier=frontend
I0516 02:05:51.991] Labels:       app=guestbook
I0516 02:05:51.991]               tier=frontend
I0516 02:05:51.991] Annotations:  <none>
I0516 02:05:51.991] Replicas:     3 current / 3 desired
I0516 02:05:51.991] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:51.991] Pod Template:
I0516 02:05:51.992]   Labels:  app=guestbook
I0516 02:05:51.992]            tier=frontend
I0516 02:05:51.992]   Containers:
I0516 02:05:51.992]    php-redis:
I0516 02:05:51.992]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0516 02:05:52.129] Namespace:    namespace-1557972349-17904
I0516 02:05:52.129] Selector:     app=guestbook,tier=frontend
I0516 02:05:52.129] Labels:       app=guestbook
I0516 02:05:52.129]               tier=frontend
I0516 02:05:52.129] Annotations:  <none>
I0516 02:05:52.130] Replicas:     3 current / 3 desired
I0516 02:05:52.130] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:52.130] Pod Template:
I0516 02:05:52.130]   Labels:  app=guestbook
I0516 02:05:52.130]            tier=frontend
I0516 02:05:52.130]   Containers:
I0516 02:05:52.130]    php-redis:
I0516 02:05:52.131]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 02:05:52.237] Namespace:    namespace-1557972349-17904
I0516 02:05:52.237] Selector:     app=guestbook,tier=frontend
I0516 02:05:52.237] Labels:       app=guestbook
I0516 02:05:52.237]               tier=frontend
I0516 02:05:52.237] Annotations:  <none>
I0516 02:05:52.237] Replicas:     3 current / 3 desired
I0516 02:05:52.238] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:52.238] Pod Template:
I0516 02:05:52.238]   Labels:  app=guestbook
I0516 02:05:52.238]            tier=frontend
I0516 02:05:52.238]   Containers:
I0516 02:05:52.238]    php-redis:
I0516 02:05:52.239]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 02:05:52.367] Namespace:    namespace-1557972349-17904
I0516 02:05:52.367] Selector:     app=guestbook,tier=frontend
I0516 02:05:52.368] Labels:       app=guestbook
I0516 02:05:52.368]               tier=frontend
I0516 02:05:52.368] Annotations:  <none>
I0516 02:05:52.368] Replicas:     3 current / 3 desired
I0516 02:05:52.368] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:52.368] Pod Template:
I0516 02:05:52.369]   Labels:  app=guestbook
I0516 02:05:52.369]            tier=frontend
I0516 02:05:52.369]   Containers:
I0516 02:05:52.369]    php-redis:
I0516 02:05:52.369]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0516 02:05:52.480] Namespace:    namespace-1557972349-17904
I0516 02:05:52.480] Selector:     app=guestbook,tier=frontend
I0516 02:05:52.480] Labels:       app=guestbook
I0516 02:05:52.480]               tier=frontend
I0516 02:05:52.480] Annotations:  <none>
I0516 02:05:52.481] Replicas:     3 current / 3 desired
I0516 02:05:52.481] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 02:05:52.481] Pod Template:
I0516 02:05:52.481]   Labels:  app=guestbook
I0516 02:05:52.481]            tier=frontend
I0516 02:05:52.482]   Containers:
I0516 02:05:52.482]    php-redis:
I0516 02:05:52.482]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 180 lines ...
I0516 02:05:58.185] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 02:05:58.281] apps.sh:651: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0516 02:05:58.363] horizontalpodautoscaler.autoscaling "frontend" deleted
I0516 02:05:58.454] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 02:05:58.557] apps.sh:655: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0516 02:05:58.634] horizontalpodautoscaler.autoscaling "frontend" deleted
W0516 02:05:58.735] Error: required flag(s) "max" not set
W0516 02:05:58.735] 
W0516 02:05:58.736] 
W0516 02:05:58.736] Examples:
W0516 02:05:58.736]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0516 02:05:58.736]   kubectl autoscale deployment foo --min=2 --max=10
W0516 02:05:58.736]   
... skipping 89 lines ...
I0516 02:06:02.144] apps.sh:439: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 02:06:02.261] apps.sh:440: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 02:06:02.378] statefulset.apps/nginx rolled back
I0516 02:06:02.497] apps.sh:443: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0516 02:06:02.598] apps.sh:444: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 02:06:02.726] Successful
I0516 02:06:02.726] message:error: unable to find specified revision 1000000 in history
I0516 02:06:02.727] has:unable to find specified revision
I0516 02:06:02.836] apps.sh:448: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0516 02:06:02.934] apps.sh:449: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 02:06:03.055] statefulset.apps/nginx rolled back
I0516 02:06:03.167] apps.sh:452: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0516 02:06:03.290] apps.sh:453: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0516 02:06:05.558] Name:         mock
I0516 02:06:05.558] Namespace:    namespace-1557972364-27475
I0516 02:06:05.558] Selector:     app=mock
I0516 02:06:05.558] Labels:       app=mock
I0516 02:06:05.559] Annotations:  <none>
I0516 02:06:05.559] Replicas:     1 current / 1 desired
I0516 02:06:05.559] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 02:06:05.559] Pod Template:
I0516 02:06:05.559]   Labels:  app=mock
I0516 02:06:05.560]   Containers:
I0516 02:06:05.560]    mock-container:
I0516 02:06:05.560]     Image:        k8s.gcr.io/pause:2.0
I0516 02:06:05.560]     Port:         9949/TCP
... skipping 56 lines ...
I0516 02:06:08.566] Name:         mock
I0516 02:06:08.566] Namespace:    namespace-1557972364-27475
I0516 02:06:08.566] Selector:     app=mock
I0516 02:06:08.567] Labels:       app=mock
I0516 02:06:08.567] Annotations:  <none>
I0516 02:06:08.567] Replicas:     1 current / 1 desired
I0516 02:06:08.567] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 02:06:08.567] Pod Template:
I0516 02:06:08.568]   Labels:  app=mock
I0516 02:06:08.568]   Containers:
I0516 02:06:08.568]    mock-container:
I0516 02:06:08.568]     Image:        k8s.gcr.io/pause:2.0
I0516 02:06:08.568]     Port:         9949/TCP
... skipping 56 lines ...
I0516 02:06:11.448] Name:         mock
I0516 02:06:11.448] Namespace:    namespace-1557972364-27475
I0516 02:06:11.449] Selector:     app=mock
I0516 02:06:11.449] Labels:       app=mock
I0516 02:06:11.449] Annotations:  <none>
I0516 02:06:11.449] Replicas:     1 current / 1 desired
I0516 02:06:11.449] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 02:06:11.450] Pod Template:
I0516 02:06:11.450]   Labels:  app=mock
I0516 02:06:11.450]   Containers:
I0516 02:06:11.450]    mock-container:
I0516 02:06:11.450]     Image:        k8s.gcr.io/pause:2.0
I0516 02:06:11.450]     Port:         9949/TCP
... skipping 42 lines ...
I0516 02:06:14.099] Namespace:    namespace-1557972364-27475
I0516 02:06:14.099] Selector:     app=mock
I0516 02:06:14.099] Labels:       app=mock
I0516 02:06:14.099]               status=replaced
I0516 02:06:14.100] Annotations:  <none>
I0516 02:06:14.100] Replicas:     1 current / 1 desired
I0516 02:06:14.100] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 02:06:14.100] Pod Template:
I0516 02:06:14.100]   Labels:  app=mock
I0516 02:06:14.100]   Containers:
I0516 02:06:14.101]    mock-container:
I0516 02:06:14.101]     Image:        k8s.gcr.io/pause:2.0
I0516 02:06:14.101]     Port:         9949/TCP
... skipping 11 lines ...
I0516 02:06:14.108] Namespace:    namespace-1557972364-27475
I0516 02:06:14.108] Selector:     app=mock2
I0516 02:06:14.108] Labels:       app=mock2
I0516 02:06:14.109]               status=replaced
I0516 02:06:14.109] Annotations:  <none>
I0516 02:06:14.109] Replicas:     1 current / 1 desired
I0516 02:06:14.109] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 02:06:14.109] Pod Template:
I0516 02:06:14.110]   Labels:  app=mock2
I0516 02:06:14.110]   Containers:
I0516 02:06:14.110]    mock-container:
I0516 02:06:14.110]     Image:        k8s.gcr.io/pause:2.0
I0516 02:06:14.110]     Port:         9949/TCP
... skipping 106 lines ...
I0516 02:06:29.598] +++ [0516 02:06:29] Testing persistent volumes
I0516 02:06:29.692] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:06:29.983] persistentvolume/pv0001 created
I0516 02:06:30.111] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0516 02:06:30.206] persistentvolume "pv0001" deleted
I0516 02:06:30.446] persistentvolume/pv0002 created
W0516 02:06:30.547] E0516 02:06:30.451133   51197 pv_protection_controller.go:117] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0516 02:06:30.647] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0516 02:06:30.667] persistentvolume "pv0002" deleted
I0516 02:06:31.028] persistentvolume/pv0003 created
W0516 02:06:31.129] E0516 02:06:31.033191   51197 pv_protection_controller.go:117] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0516 02:06:31.230] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0516 02:06:31.240] persistentvolume "pv0003" deleted
I0516 02:06:31.363] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 02:06:31.570] persistentvolume/pv0001 created
W0516 02:06:31.671] E0516 02:06:31.574994   51197 pv_protection_controller.go:117] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0516 02:06:31.772] storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0516 02:06:31.833] Successful
I0516 02:06:31.833] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 02:06:31.834] persistentvolume "pv0001" deleted
I0516 02:06:31.834] has:warning: deleting cluster-scoped resources
I0516 02:06:31.835] Successful
... skipping 491 lines ...
I0516 02:06:46.559] yes
I0516 02:06:46.559] has:the server doesn't have a resource type
I0516 02:06:46.652] Successful
I0516 02:06:46.653] message:yes
I0516 02:06:46.653] has:yes
I0516 02:06:46.738] Successful
I0516 02:06:46.739] message:error: --subresource can not be used with NonResourceURL
I0516 02:06:46.739] has:subresource can not be used with NonResourceURL
I0516 02:06:46.823] Successful
I0516 02:06:46.913] Successful
I0516 02:06:46.913] message:yes
I0516 02:06:46.913] 0
I0516 02:06:46.914] has:0
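The yes/0 checks above come from `kubectl auth can-i`; a hedged sketch of the kinds of calls involved (verbs and paths are illustrative, the script's exact arguments are elided):

  kubectl auth can-i get pods                      # prints "yes" or "no"
  kubectl auth can-i get /logs/                    # non-resource URL form
  kubectl auth can-i get /logs/ --subresource=log  # rejected, since --subresource cannot be combined with a NonResourceURL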
... skipping 39 lines ...
I0516 02:06:47.710] role.rbac.authorization.k8s.io/testing-R reconciled
I0516 02:06:47.710] legacy-script.sh:801: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0516 02:06:47.818] legacy-script.sh:802: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0516 02:06:47.921] legacy-script.sh:803: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0516 02:06:48.026] legacy-script.sh:804: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0516 02:06:48.132] Successful
I0516 02:06:48.133] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0516 02:06:48.133] has:only rbac.authorization.k8s.io/v1 is supported
I0516 02:06:48.236] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0516 02:06:48.241] role.rbac.authorization.k8s.io "testing-R" deleted
I0516 02:06:48.252] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0516 02:06:48.261] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
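The reconciled testing-R/testing-RB/testing-CR/testing-CRB objects above are managed through `kubectl auth reconcile`; a hedged sketch (the manifest name is illustrative):

  # only rbac.authorization.k8s.io/v1 manifests are accepted; a v1beta1 ClusterRole
  # is rejected with the "only rbac.authorization.k8s.io/v1 is supported" error above
  kubectl auth reconcile -f rbac-objects.yaml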
I0516 02:06:48.272] Recording: run_retrieve_multiple_tests
... skipping 33 lines ...
I0516 02:06:49.767] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0516 02:06:49.769] +++ working dir: /go/src/k8s.io/kubernetes
I0516 02:06:49.772] +++ command: run_kubectl_explain_tests
I0516 02:06:49.783] +++ [0516 02:06:49] Testing kubectl(v1:explain)
W0516 02:06:49.883] I0516 02:06:49.583000   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972408-14464", Name:"cassandra", UID:"d645d6a6-bce4-4c87-810a-06205f0f5db5", APIVersion:"v1", ResourceVersion:"3078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-mslnp
W0516 02:06:49.884] I0516 02:06:49.601986   51197 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557972408-14464", Name:"cassandra", UID:"d645d6a6-bce4-4c87-810a-06205f0f5db5", APIVersion:"v1", ResourceVersion:"3078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-qrpvz
W0516 02:06:49.884] E0516 02:06:49.607324   51197 replica_set.go:450] Sync "namespace-1557972408-14464/cassandra" failed with replicationcontrollers "cassandra" not found
I0516 02:06:49.985] KIND:     Pod
I0516 02:06:49.985] VERSION:  v1
I0516 02:06:49.985] 
I0516 02:06:49.985] DESCRIPTION:
I0516 02:06:49.986]      Pod is a collection of containers that can run on a host. This resource is
I0516 02:06:49.986]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
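The KIND/VERSION/DESCRIPTION block above is standard `kubectl explain` output for the Pod kind; a minimal equivalent:

  kubectl explain pods                     # top-level description of the Pod kind
  kubectl explain pods.spec.containers     # drill down into a nested field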
I0516 02:07:20.114] message:node/127.0.0.1 already uncordoned (dry run)
I0516 02:07:20.114] has:already uncordoned
I0516 02:07:20.203] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0516 02:07:20.283] node/127.0.0.1 labeled
I0516 02:07:20.381] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0516 02:07:20.452] Successful
I0516 02:07:20.452] message:error: cannot specify both a node name and a --selector option
I0516 02:07:20.452] See 'kubectl drain -h' for help and examples
I0516 02:07:20.452] has:cannot specify both a node name
I0516 02:07:20.521] Successful
I0516 02:07:20.521] message:error: USAGE: cordon NODE [flags]
I0516 02:07:20.521] See 'kubectl cordon -h' for help and examples
I0516 02:07:20.521] has:error\: USAGE\: cordon NODE
I0516 02:07:20.601] node/127.0.0.1 already uncordoned
I0516 02:07:20.681] Successful
I0516 02:07:20.682] message:error: You must provide one or more resources by argument or filename.
I0516 02:07:20.682] Example resource specifications include:
I0516 02:07:20.682]    '-f rsrc.yaml'
I0516 02:07:20.682]    '--filename=rsrc.json'
I0516 02:07:20.682]    '<resource> <name>'
I0516 02:07:20.683]    '<resource>'
I0516 02:07:20.683] has:must provide one or more resources
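The node-management.sh checks above exercise cordon, uncordon and drain against node 127.0.0.1; a hedged sketch of the commands behind them:

  kubectl cordon 127.0.0.1                      # mark the node unschedulable
  kubectl uncordon 127.0.0.1                    # clear the flag again
  kubectl drain 127.0.0.1 --ignore-daemonsets   # evict pods; a node name and --selector are mutually exclusive, as the error above shows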
... skipping 15 lines ...
I0516 02:07:21.459] Successful
I0516 02:07:21.460] message:The following compatible plugins are available:
I0516 02:07:21.460] 
I0516 02:07:21.460] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0516 02:07:21.460]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0516 02:07:21.461] 
I0516 02:07:21.461] error: one plugin warning was found
I0516 02:07:21.461] has:kubectl-version overwrites existing command: "kubectl version"
I0516 02:07:21.537] Successful
I0516 02:07:21.537] message:The following compatible plugins are available:
I0516 02:07:21.538] 
I0516 02:07:21.538] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 02:07:21.538] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0516 02:07:21.538]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 02:07:21.539] 
I0516 02:07:21.539] error: one plugin warning was found
I0516 02:07:21.539] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0516 02:07:21.608] Successful
I0516 02:07:21.608] message:The following compatible plugins are available:
I0516 02:07:21.608] 
I0516 02:07:21.608] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 02:07:21.609] has:plugins are available
I0516 02:07:21.678] Successful
I0516 02:07:21.679] message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
I0516 02:07:21.679] error: unable to find any kubectl plugins in your PATH
I0516 02:07:21.679] has:unable to find any kubectl plugins in your PATH
I0516 02:07:21.761] Successful
I0516 02:07:21.761] message:I am plugin foo
I0516 02:07:21.761] has:plugin foo
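The plugin warnings above come from kubectl's plugin discovery, which treats any executable named kubectl-<name> on PATH as a subcommand; a minimal sketch (paths are illustrative):

  mkdir -p /tmp/plugins
  printf '#!/bin/sh\necho "I am plugin foo"\n' > /tmp/plugins/kubectl-foo
  chmod +x /tmp/plugins/kubectl-foo
  PATH=/tmp/plugins:$PATH kubectl foo          # kubectl dispatches unknown subcommands to kubectl-<name> on PATH
  PATH=/tmp/plugins:$PATH kubectl plugin list  # lists visible plugins and warns about overwritten or overshadowed names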
I0516 02:07:21.831] Successful
I0516 02:07:21.831] message:Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.0.61+2ebd40964b8b67", GitCommit:"2ebd40964b8b67b8501f9726bcff8d69b7e8f0df", GitTreeState:"clean", BuildDate:"2019-05-16T01:58:05Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0516 02:07:21.903] 
I0516 02:07:21.906] +++ Running case: test-cmd.run_impersonation_tests 
I0516 02:07:21.908] +++ working dir: /go/src/k8s.io/kubernetes
I0516 02:07:21.910] +++ command: run_impersonation_tests
I0516 02:07:21.919] +++ [0516 02:07:21] Testing impersonation
I0516 02:07:21.989] Successful
I0516 02:07:21.989] message:error: requesting groups or user-extra for  without impersonating a user
I0516 02:07:21.990] has:without impersonating a user
I0516 02:07:22.167] certificatesigningrequest.certificates.k8s.io/foo created
I0516 02:07:22.269] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0516 02:07:22.357] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0516 02:07:22.438] certificatesigningrequest.certificates.k8s.io "foo" deleted
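The impersonation checks above submit a CSR while acting as another user, then verify that the API server recorded user1 as the requester; a hedged sketch of the flags involved (the CSR manifest itself is elided from the log):

  kubectl create -f csr.yaml --as=user1                     # impersonate user1 for this request
  kubectl get csr foo -o go-template='{{.spec.username}}'   # expect: user1
  # passing --as-group on its own fails with the "without impersonating a user" error above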
I0516 02:07:22.627] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 53 lines ...
W0516 02:07:25.908] I0516 02:07:25.900185   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.908] I0516 02:07:25.900197   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.909] I0516 02:07:25.900367   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.909] I0516 02:07:25.900400   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.909] I0516 02:07:25.900403   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.909] I0516 02:07:25.900423   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.910] W0516 02:07:25.900428   47831 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 02:07:25.910] I0516 02:07:25.900459   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.910] I0516 02:07:25.900591   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.910] W0516 02:07:25.900607   47831 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 02:07:25.911] W0516 02:07:25.900652   47831 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 02:07:25.911] W0516 02:07:25.900692   47831 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 02:07:25.911] I0516 02:07:25.900872   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.912] I0516 02:07:25.900907   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.912] I0516 02:07:25.901672   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.912] I0516 02:07:25.901813   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.912] I0516 02:07:25.901921   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.913] I0516 02:07:25.902023   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 33 lines ...
W0516 02:07:25.922] I0516 02:07:25.903802   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.922] I0516 02:07:25.903813   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.922] I0516 02:07:25.902549   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.923] I0516 02:07:25.903869   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.923] I0516 02:07:25.903884   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.923] I0516 02:07:25.903886   47831 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 02:07:25.923] E0516 02:07:25.903922   47831 controller.go:179] rpc error: code = Unavailable desc = transport is closing
I0516 02:07:26.024] No resources found
I0516 02:07:26.024] No resources found
I0516 02:07:26.024] +++ [0516 02:07:25] TESTS PASSED
I0516 02:07:26.025] junit report dir: /workspace/artifacts
I0516 02:07:26.025] +++ [0516 02:07:26] Clean up complete
W0516 02:07:26.125] + make test-integration
... skipping 34 lines ...
I0516 02:23:23.288] ok  	k8s.io/kubernetes/test/integration/openshift	0.338s
I0516 02:23:23.288] ok  	k8s.io/kubernetes/test/integration/pods	11.126s
I0516 02:23:23.289] ok  	k8s.io/kubernetes/test/integration/quota	8.315s
I0516 02:23:23.289] ok  	k8s.io/kubernetes/test/integration/replicaset	65.226s
I0516 02:23:23.289] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	59.269s
I0516 02:23:23.289] ok  	k8s.io/kubernetes/test/integration/scale	7.515s
I0516 02:23:23.289] FAIL	k8s.io/kubernetes/test/integration/scheduler	530.399s
I0516 02:23:23.289] ok  	k8s.io/kubernetes/test/integration/scheduler_perf	0.196s
I0516 02:23:23.290] ok  	k8s.io/kubernetes/test/integration/secrets	3.908s
I0516 02:23:23.290] ok  	k8s.io/kubernetes/test/integration/serviceaccount	43.087s
I0516 02:23:23.290] ok  	k8s.io/kubernetes/test/integration/serving	63.819s
I0516 02:23:23.290] ok  	k8s.io/kubernetes/test/integration/statefulset	11.525s
I0516 02:23:23.290] ok  	k8s.io/kubernetes/test/integration/storageclasses	4.035s
I0516 02:23:23.291] ok  	k8s.io/kubernetes/test/integration/tls	8.812s
I0516 02:23:23.291] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.411s
I0516 02:23:23.291] ok  	k8s.io/kubernetes/test/integration/volume	98.480s
I0516 02:23:23.291] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	198.286s
I0516 02:23:40.572] +++ [0516 02:23:40] Saved JUnit XML test report to /workspace/artifacts/junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-020812.xml
I0516 02:23:40.576] Makefile:185: recipe for target 'test' failed
I0516 02:23:40.586] +++ [0516 02:23:40] Cleaning up etcd
W0516 02:23:40.686] make[1]: *** [test] Error 1
W0516 02:23:40.687] !!! [0516 02:23:40] Call tree:
W0516 02:23:40.687] !!! [0516 02:23:40]  1: hack/make-rules/test-integration.sh:102 runTests(...)
I0516 02:23:41.079] +++ [0516 02:23:41] Integration test cleanup complete
I0516 02:23:41.080] Makefile:204: recipe for target 'test-integration' failed
W0516 02:23:41.181] make: *** [test-integration] Error 1
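The FAIL above is confined to the scheduler integration package; a sketch of one way to rerun just that package locally, assuming the standard hack/make-rules wrapper honours the WHAT filter (variables are illustrative):

  make test-integration WHAT=./test/integration/scheduler
  # KUBE_TEST_ARGS can narrow this further to a single -run pattern if the wrapper supports it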
W0516 02:23:51.892] Traceback (most recent call last):
W0516 02:23:51.892]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0516 02:23:51.916]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0516 02:23:51.916]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0516 02:23:51.917]     check(*cmd)
W0516 02:23:51.917]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0516 02:23:51.917]     subprocess.check_call(cmd)
W0516 02:23:51.917]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0516 02:23:51.926]     raise CalledProcessError(retcode, cmd)
W0516 02:23:51.927] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0516 02:23:51.976] Command failed
I0516 02:23:51.977] process 662 exited with code 1 after 34.9m
E0516 02:23:51.977] FAIL: pull-kubernetes-integration
I0516 02:23:51.977] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0516 02:23:58.359] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0516 02:23:58.426] process 111099 exited with code 0 after 0.1m
I0516 02:23:58.426] Call:  gcloud config get-value account
I0516 02:23:58.798] process 111111 exited with code 0 after 0.0m
I0516 02:23:58.799] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0516 02:23:58.799] Upload result and artifacts...
I0516 02:23:58.799] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/77955/pull-kubernetes-integration/1128839285433176064
I0516 02:23:58.800] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/77955/pull-kubernetes-integration/1128839285433176064/artifacts
W0516 02:24:04.486] CommandException: One or more URLs matched no objects.
E0516 02:24:04.621] Command failed
I0516 02:24:04.621] process 111123 exited with code 1 after 0.1m
W0516 02:24:04.622] Remote dir gs://kubernetes-jenkins/pr-logs/pull/77955/pull-kubernetes-integration/1128839285433176064/artifacts not exist yet
I0516 02:24:04.622] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/77955/pull-kubernetes-integration/1128839285433176064/artifacts
I0516 02:24:10.500] process 111265 exited with code 0 after 0.1m
W0516 02:24:10.501] metadata path /workspace/_artifacts/metadata.json does not exist
W0516 02:24:10.501] metadata not found or invalid, init with empty metadata
... skipping 22 lines ...