Result: FAILURE
Tests: 1 failed / 648 succeeded
Started: 2019-03-15 00:04
Elapsed: 29m58s
Revision:
Builder: gke-prow-containerd-pool-99179761-dtm3
pod: baaf950c-46b5-11e9-ab9f-0a580a6c0a8e
resultstore: https://source.cloud.google.com/results/invocations/fdcf7ba6-fb67-4f1d-92cb-5db100dfbe1a/targets/test
infra-commit: b6f1b9425
repo: k8s.io/kubernetes
repo-commit: dfa25fcc7722d2c9f8c3e05bc48e822d6a956069
repos: k8s.io/kubernetes: master

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 34s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
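Repro note (not part of the original job output; the script path and make variables below are assumptions based on the kubernetes repo's contributor docs): the log shows the test wiring its storage to a local etcd at 127.0.0.1:2379, so a local reproduction from the repository root might look like:

# start a local etcd first; the integration suite expects it on 127.0.0.1:2379 (assumed setup)
hack/install-etcd.sh && export PATH="$(pwd)/third_party/etcd:${PATH}"
# run only the failing test; WHAT and KUBE_TEST_ARGS are the usual integration-test knobs (assumed)
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run TestNodePIDPressure$"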
I0315 00:24:47.084900  124311 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0315 00:24:47.084926  124311 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0315 00:24:47.084934  124311 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0315 00:24:47.084947  124311 master.go:233] Using reconciler: 
I0315 00:24:47.086476  124311 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.086589  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.086607  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.086635  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.086684  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.086979  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.087043  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.087120  124311 store.go:1319] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0315 00:24:47.087191  124311 reflector.go:161] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0315 00:24:47.087170  124311 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.087640  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.087662  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.087704  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.087746  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.087981  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.088018  124311 store.go:1319] Monitoring events count at <storage-prefix>//events
I0315 00:24:47.088024  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.088054  124311 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.088151  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.088170  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.088205  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.088248  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.088462  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.088575  124311 store.go:1319] Monitoring limitranges count at <storage-prefix>//limitranges
I0315 00:24:47.088605  124311 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.088675  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.088693  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.088723  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.088787  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.088818  124311 reflector.go:161] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0315 00:24:47.088970  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.089441  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.089515  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.089600  124311 store.go:1319] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0315 00:24:47.089775  124311 reflector.go:161] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0315 00:24:47.089770  124311 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.089920  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.089937  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.089968  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.090019  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.090399  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.090520  124311 store.go:1319] Monitoring secrets count at <storage-prefix>//secrets
I0315 00:24:47.090615  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.090698  124311 reflector.go:161] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0315 00:24:47.090689  124311 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.090858  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.090876  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.090908  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.090942  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.091172  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.091292  124311 store.go:1319] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0315 00:24:47.091426  124311 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.091486  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.091523  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.091550  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.091583  124311 reflector.go:161] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0315 00:24:47.091608  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.091767  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.092055  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.092176  124311 store.go:1319] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0315 00:24:47.092252  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.092373  124311 reflector.go:161] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0315 00:24:47.093913  124311 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.094228  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.094256  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.094405  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.094784  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.097110  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.097649  124311 store.go:1319] Monitoring configmaps count at <storage-prefix>//configmaps
I0315 00:24:47.097670  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.098485  124311 reflector.go:161] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0315 00:24:47.103530  124311 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.104323  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.104373  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.105036  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.105694  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.110240  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.111454  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.111899  124311 store.go:1319] Monitoring namespaces count at <storage-prefix>//namespaces
I0315 00:24:47.112026  124311 reflector.go:161] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0315 00:24:47.112689  124311 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.112853  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.112867  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.112976  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.113082  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.114816  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.114980  124311 store.go:1319] Monitoring endpoints count at <storage-prefix>//endpoints
I0315 00:24:47.115417  124311 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.115536  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.115559  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.115592  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.115602  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.115653  124311 reflector.go:161] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0315 00:24:47.115812  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.117256  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.117467  124311 store.go:1319] Monitoring nodes count at <storage-prefix>//nodes
I0315 00:24:47.117544  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.117597  124311 reflector.go:161] Listing and watching *core.Node from storage/cacher.go:/nodes
I0315 00:24:47.117819  124311 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.117931  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.117945  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.118010  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.118077  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.124305  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.124604  124311 store.go:1319] Monitoring pods count at <storage-prefix>//pods
I0315 00:24:47.125446  124311 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.125617  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.125633  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.125674  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.125906  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.125977  124311 reflector.go:161] Listing and watching *core.Pod from storage/cacher.go:/pods
I0315 00:24:47.127254  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.127675  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.127830  124311 store.go:1319] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0315 00:24:47.128111  124311 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.128707  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.128739  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.128759  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.128783  124311 reflector.go:161] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0315 00:24:47.128805  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.128899  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.131406  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.131638  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.131770  124311 store.go:1319] Monitoring services count at <storage-prefix>//services
I0315 00:24:47.131818  124311 reflector.go:161] Listing and watching *core.Service from storage/cacher.go:/services
I0315 00:24:47.131829  124311 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.132000  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.132023  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.132074  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.132201  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.132704  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.132889  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.133156  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.133193  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.133236  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.133409  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.133876  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.134203  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.134470  124311 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.134605  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.134672  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.134773  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.134831  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.135208  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.135339  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.135402  124311 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0315 00:24:47.135983  124311 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0315 00:24:47.168144  124311 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0315 00:24:47.168200  124311 master.go:425] Enabling API group "authentication.k8s.io".
I0315 00:24:47.168217  124311 master.go:425] Enabling API group "authorization.k8s.io".
I0315 00:24:47.168387  124311 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.168551  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.168574  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.168623  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.168686  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.169082  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.169116  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.169322  124311 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0315 00:24:47.169401  124311 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0315 00:24:47.169550  124311 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.169647  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.169693  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.169735  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.169791  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.170094  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.170180  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.170294  124311 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0315 00:24:47.170348  124311 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0315 00:24:47.170448  124311 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.170585  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.170607  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.170637  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.170688  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.171789  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.171837  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.171907  124311 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0315 00:24:47.171936  124311 master.go:425] Enabling API group "autoscaling".
I0315 00:24:47.171949  124311 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0315 00:24:47.172094  124311 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.172187  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.172200  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.172228  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.172273  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.172703  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.172788  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.173051  124311 store.go:1319] Monitoring jobs.batch count at <storage-prefix>//jobs
I0315 00:24:47.173226  124311 reflector.go:161] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0315 00:24:47.173711  124311 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.173827  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.173879  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.173915  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.174051  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.174326  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.174565  124311 store.go:1319] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0315 00:24:47.174595  124311 master.go:425] Enabling API group "batch".
I0315 00:24:47.174637  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.174695  124311 reflector.go:161] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0315 00:24:47.174741  124311 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.174845  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.174876  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.174927  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.175002  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.175931  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.176005  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.176084  124311 store.go:1319] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0315 00:24:47.176783  124311 master.go:425] Enabling API group "certificates.k8s.io".
I0315 00:24:47.176106  124311 reflector.go:161] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0315 00:24:47.177105  124311 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.177227  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.177546  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.177623  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.177668  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.178100  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.178208  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.178626  124311 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0315 00:24:47.178674  124311 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0315 00:24:47.178789  124311 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.178906  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.178923  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.178961  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.179015  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.179949  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.180057  124311 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0315 00:24:47.180070  124311 master.go:425] Enabling API group "coordination.k8s.io".
I0315 00:24:47.180220  124311 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.180283  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.180298  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.180326  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.180406  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.180441  124311 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0315 00:24:47.180585  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.181223  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.181360  124311 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0315 00:24:47.181525  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.181479  124311 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.181604  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.181613  124311 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0315 00:24:47.181621  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.181658  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.181711  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.181973  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.182023  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.182142  124311 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0315 00:24:47.182188  124311 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0315 00:24:47.182287  124311 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.182362  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.182380  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.182416  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.182459  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.183892  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.184042  124311 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 00:24:47.184094  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.184184  124311 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.184257  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.184274  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.184302  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.184351  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.184402  124311 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 00:24:47.184625  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.184963  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.188579  124311 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0315 00:24:47.188659  124311 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0315 00:24:47.189880  124311 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.190071  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.190334  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.190409  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.190533  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.190871  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.190931  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.191059  124311 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0315 00:24:47.191183  124311 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0315 00:24:47.191232  124311 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.191304  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.191316  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.191344  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.191378  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.192199  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.192237  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.192341  124311 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0315 00:24:47.192434  124311 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0315 00:24:47.192523  124311 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.192597  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.192610  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.192638  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.192677  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.192998  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.193119  124311 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0315 00:24:47.193155  124311 master.go:425] Enabling API group "extensions".
I0315 00:24:47.193287  124311 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.193371  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.193386  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.193414  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.193537  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.193568  124311 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0315 00:24:47.193718  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.194024  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.194112  124311 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0315 00:24:47.194275  124311 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.194341  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.194352  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.194386  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.194510  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.194539  124311 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0315 00:24:47.194664  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.194944  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.195045  124311 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0315 00:24:47.195062  124311 master.go:425] Enabling API group "networking.k8s.io".
I0315 00:24:47.195092  124311 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.195165  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.195177  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.195206  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.195273  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.195298  124311 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0315 00:24:47.195402  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.195730  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.195854  124311 store.go:1319] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0315 00:24:47.195870  124311 master.go:425] Enabling API group "node.k8s.io".
I0315 00:24:47.196158  124311 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.196238  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.196259  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.196288  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.196385  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.196429  124311 reflector.go:161] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0315 00:24:47.196518  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.196922  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.196962  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.197090  124311 store.go:1319] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0315 00:24:47.197350  124311 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.197432  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.197445  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.197473  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.197589  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.197634  124311 reflector.go:161] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0315 00:24:47.197844  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.197968  124311 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0315 00:24:47.197983  124311 master.go:425] Enabling API group "policy".
I0315 00:24:47.198013  124311 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.198098  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.198113  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.198138  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.198155  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.198201  124311 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0315 00:24:47.198303  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.198586  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.198674  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.198676  124311 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0315 00:24:47.198696  124311 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0315 00:24:47.198828  124311 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.198896  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.198908  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.198938  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.199021  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.199363  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.199508  124311 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0315 00:24:47.199571  124311 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0315 00:24:47.199609  124311 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.199678  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.199698  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.199728  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.199679  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.199783  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.200051  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.200209  124311 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0315 00:24:47.200350  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.200372  124311 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.200468  124311 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0315 00:24:47.200556  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.200576  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.200604  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.200642  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.200882  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.201008  124311 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0315 00:24:47.201045  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.201056  124311 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.201123  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.201147  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.201179  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.201229  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.201243  124311 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0315 00:24:47.201453  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.201563  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.201565  124311 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0315 00:24:47.201583  124311 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0315 00:24:47.201720  124311 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.201789  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.201805  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.201836  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.201884  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.203842  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.203943  124311 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0315 00:24:47.203970  124311 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.204033  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.204045  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.204073  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.204180  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.204215  124311 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0315 00:24:47.204326  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.204598  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.204657  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.204699  124311 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0315 00:24:47.204759  124311 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0315 00:24:47.206413  124311 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.206530  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.206547  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.206578  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.206619  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.207228  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.207296  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.207463  124311 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0315 00:24:47.207513  124311 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0315 00:24:47.207557  124311 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0315 00:24:47.209272  124311 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.209529  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.209581  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.209630  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.209707  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.210715  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.210916  124311 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0315 00:24:47.211155  124311 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.211231  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.211249  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.211280  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.211288  124311 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0315 00:24:47.211283  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.211598  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.212029  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.212086  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.212150  124311 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0315 00:24:47.212169  124311 master.go:425] Enabling API group "scheduling.k8s.io".
I0315 00:24:47.212198  124311 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0315 00:24:47.212296  124311 master.go:417] Skipping disabled API group "settings.k8s.io".
I0315 00:24:47.212445  124311 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.212540  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.212555  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.212587  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.212633  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.213048  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.213124  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.213238  124311 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0315 00:24:47.213274  124311 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.213318  124311 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0315 00:24:47.213339  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.213351  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.213378  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.213431  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.213714  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.213750  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.213909  124311 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0315 00:24:47.213962  124311 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0315 00:24:47.213948  124311 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.214072  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.214088  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.214120  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.214181  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.215162  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.215268  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.215382  124311 store.go:1319] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0315 00:24:47.215418  124311 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.215434  124311 reflector.go:161] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0315 00:24:47.215507  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.215524  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.215552  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.215593  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.215849  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.215893  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.215997  124311 store.go:1319] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0315 00:24:47.216062  124311 reflector.go:161] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0315 00:24:47.216194  124311 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.216266  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.216279  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.216319  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.216366  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.216732  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.216840  124311 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0315 00:24:47.216867  124311 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.216935  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.216948  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.216975  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.217125  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.217168  124311 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0315 00:24:47.217318  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.217616  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.217686  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.217727  124311 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0315 00:24:47.217746  124311 master.go:425] Enabling API group "storage.k8s.io".
I0315 00:24:47.217805  124311 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0315 00:24:47.217897  124311 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.217972  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.217984  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.218011  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.218067  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.218384  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.218559  124311 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 00:24:47.218726  124311 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.218792  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.218810  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.218846  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.218972  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.219006  124311 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 00:24:47.219141  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.219586  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.219678  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.219717  124311 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0315 00:24:47.219773  124311 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0315 00:24:47.219894  124311 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.220029  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.220046  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.220077  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.220122  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.220461  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.220589  124311 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0315 00:24:47.220736  124311 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.220810  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.220822  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.220848  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.220933  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.220963  124311 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0315 00:24:47.221096  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.221380  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.221520  124311 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 00:24:47.221671  124311 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.221739  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.221751  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.221776  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.221841  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.221885  124311 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 00:24:47.221968  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.222243  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.222278  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.222359  124311 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0315 00:24:47.222433  124311 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0315 00:24:47.222544  124311 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.222619  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.222637  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.222672  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.222718  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.222993  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.223156  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.223168  124311 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0315 00:24:47.223439  124311 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.223871  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.223963  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.224043  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.223187  124311 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0315 00:24:47.224365  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.226768  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.226952  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.227264  124311 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0315 00:24:47.227444  124311 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0315 00:24:47.228046  124311 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.228211  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.228255  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.228332  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.228459  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.229429  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.229573  124311 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0315 00:24:47.229602  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.229643  124311 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0315 00:24:47.229741  124311 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.229819  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.229832  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.229881  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.229929  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.230744  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.230910  124311 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 00:24:47.231058  124311 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.231146  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.231164  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.231195  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.231281  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.231311  124311 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 00:24:47.231447  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.231888  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.232015  124311 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0315 00:24:47.232182  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.232214  124311 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.232284  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.232296  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.232324  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.232381  124311 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0315 00:24:47.232408  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.232668  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.232816  124311 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0315 00:24:47.232953  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.232966  124311 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.232999  124311 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0315 00:24:47.233037  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.233048  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.233075  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.233206  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.233425  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.233553  124311 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0315 00:24:47.233679  124311 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.233739  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.233752  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.233777  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.233802  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.233813  124311 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0315 00:24:47.233885  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.234138  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.234239  124311 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0315 00:24:47.234255  124311 master.go:425] Enabling API group "apps".
I0315 00:24:47.234284  124311 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.234341  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.234353  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.234387  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.234446  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.234468  124311 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0315 00:24:47.236351  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.236697  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.236741  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.236793  124311 store.go:1319] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0315 00:24:47.236813  124311 reflector.go:161] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0315 00:24:47.237264  124311 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.237351  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.237377  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.237409  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.237520  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.237912  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.238378  124311 store.go:1319] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0315 00:24:47.238416  124311 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0315 00:24:47.238466  124311 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"96e42bb6-2ed5-40eb-8504-1e0d7ebe56d7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 00:24:47.238691  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:47.238716  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:47.238744  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:47.238861  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.238899  124311 reflector.go:161] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0315 00:24:47.239062  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:47.239342  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:47.239385  124311 store.go:1319] Monitoring events count at <storage-prefix>//events
I0315 00:24:47.239405  124311 master.go:425] Enabling API group "events.k8s.io".
I0315 00:24:47.240615  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:24:47.243722  124311 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
W0315 00:24:47.251552  124311 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0315 00:24:47.257027  124311 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0315 00:24:47.258191  124311 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0315 00:24:47.261158  124311 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0315 00:24:47.272670  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.272696  124311 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0315 00:24:47.272702  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.272708  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.272712  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.272842  124311 wrap.go:47] GET /healthz: (247.434µs) 500
goroutine 16865 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a8920e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a8920e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00577afe0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000691380, 0xc0022b8340, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000691380, 0xc007056700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000691380, 0xc007056600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000691380, 0xc007056600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a80d380, 0xc00a96ac20, 0x6387d00, 0xc000691380, 0xc007056600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57704]
I0315 00:24:47.273906  124311 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.305961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0315 00:24:47.276755  124311 wrap.go:47] GET /api/v1/services: (1.451108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0315 00:24:47.280376  124311 wrap.go:47] GET /api/v1/services: (1.034291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0315 00:24:47.282225  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.282303  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.282325  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.282332  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.282510  124311 wrap.go:47] GET /healthz: (361.529µs) 500
goroutine 16880 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00802a850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00802a850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00428c060, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0025746b0, 0xc0020a4600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed000)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0025746b0, 0xc002eed000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a75ed20, 0xc00a96ac20, 0x6387d00, 0xc0025746b0, 0xc002eed000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0315 00:24:47.283455  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.228615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:47.284039  124311 wrap.go:47] GET /api/v1/services: (961.177µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.284085  124311 wrap.go:47] GET /api/v1/services: (1.183999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57708]
I0315 00:24:47.285166  124311 wrap.go:47] POST /api/v1/namespaces: (1.302422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0315 00:24:47.286616  124311 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.127988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.288225  124311 wrap.go:47] POST /api/v1/namespaces: (1.30357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.289610  124311 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (866.344µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.291304  124311 wrap.go:47] POST /api/v1/namespaces: (1.334866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.373728  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.373768  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.373793  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.373846  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.374063  124311 wrap.go:47] GET /healthz: (457.632µs) 500
goroutine 16891 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00804fe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00804fe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fdaf00, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ece5c0, 0xc002b42180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028400)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ece5c0, 0xc00a028400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0096cfda0, 0xc00a96ac20, 0x6387d00, 0xc007ece5c0, 0xc00a028400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.384246  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.384282  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.384292  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.384299  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.384479  124311 wrap.go:47] GET /healthz: (363.407µs) 500
goroutine 16893 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00804ff10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00804ff10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fdb040, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ece5e8, 0xc002b42780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028b00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ece5e8, 0xc00a028b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0096cff80, 0xc00a96ac20, 0x6387d00, 0xc007ece5e8, 0xc00a028b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.473665  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.473698  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.473709  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.473716  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.473875  124311 wrap.go:47] GET /healthz: (334.669µs) 500
goroutine 16916 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a892690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a892690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00577bea0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000691488, 0xc00296a900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000691488, 0xc007057400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000691488, 0xc007057300)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000691488, 0xc007057300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a80d740, 0xc00a96ac20, 0x6387d00, 0xc000691488, 0xc007057300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.484285  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.484325  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.484336  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.484343  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.484526  124311 wrap.go:47] GET /healthz: (388.202µs) 500
goroutine 16742 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082d7500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082d7500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040c1260, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a41e0, 0xc002cd2480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a000)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a41e0, 0xc00a20a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f067e0, 0xc00a96ac20, 0x6387d00, 0xc0083a41e0, 0xc00a20a000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.573632  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.573664  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.573674  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.573681  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.573854  124311 wrap.go:47] GET /healthz: (361.058µs) 500
goroutine 16918 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a8927e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a8927e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f0a400, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0006914d0, 0xc00296af00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0006914d0, 0xc007057a00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0006914d0, 0xc007057a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a80d920, 0xc00a96ac20, 0x6387d00, 0xc0006914d0, 0xc007057a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.584234  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.584269  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.584279  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.584286  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.584435  124311 wrap.go:47] GET /healthz: (345.8µs) 500
goroutine 16744 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082d7730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082d7730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040c14e0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a4208, 0xc002cd2a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a700)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a4208, 0xc00a20a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f069c0, 0xc00a96ac20, 0x6387d00, 0xc0083a4208, 0xc00a20a700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.673661  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.673695  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.673705  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.673711  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.673892  124311 wrap.go:47] GET /healthz: (371.984µs) 500
goroutine 16895 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00979c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00979c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fdb200, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ece5f0, 0xc002b42c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a029000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a028f00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ece5f0, 0xc00a028f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a912060, 0xc00a96ac20, 0x6387d00, 0xc007ece5f0, 0xc00a028f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.684344  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.684379  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.684388  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.684393  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.684628  124311 wrap.go:47] GET /healthz: (403.788µs) 500
goroutine 16897 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00979c0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00979c0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fdb3a0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ece618, 0xc002b43080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ece618, 0xc00a029600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ece618, 0xc00a029600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a9121e0, 0xc00a96ac20, 0x6387d00, 0xc007ece618, 0xc00a029600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.773661  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.773702  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.773713  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.773720  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.773906  124311 wrap.go:47] GET /healthz: (372.338µs) 500
goroutine 16920 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a892a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a892a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f0ab00, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000691558, 0xc00296b680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000691558, 0xc00a724700)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000691558, 0xc00a724700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a80dce0, 0xc00a96ac20, 0x6387d00, 0xc000691558, 0xc00a724700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.784288  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.784328  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.784339  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.784347  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.784595  124311 wrap.go:47] GET /healthz: (415.14µs) 500
goroutine 16746 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082d78f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082d78f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040c1920, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a4230, 0xc002cd3680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20ae00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a4230, 0xc00a20ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f06c00, 0xc00a96ac20, 0x6387d00, 0xc0083a4230, 0xc00a20ae00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.873622  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.873658  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.873668  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.873675  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.873824  124311 wrap.go:47] GET /healthz: (349.688µs) 500
goroutine 16947 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00979c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00979c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003fdb680, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ece640, 0xc002b43680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ece640, 0xc00a029d00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ece640, 0xc00a029d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a9123c0, 0xc00a96ac20, 0x6387d00, 0xc007ece640, 0xc00a029d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.884277  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.884317  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.884327  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.884334  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.884479  124311 wrap.go:47] GET /healthz: (327.376µs) 500
goroutine 16748 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082d7a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082d7a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040c19c0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a4238, 0xc002cd3b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b200)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a4238, 0xc00a20b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f06cc0, 0xc00a96ac20, 0x6387d00, 0xc0083a4238, 0xc00a20b200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:47.973733  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.973773  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.973784  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.973792  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.973994  124311 wrap.go:47] GET /healthz: (391.848µs) 500
goroutine 16922 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a892c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a892c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f0ade0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000691580, 0xc00296bc80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000691580, 0xc00a724d00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000691580, 0xc00a724d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a80dec0, 0xc00a96ac20, 0x6387d00, 0xc000691580, 0xc00a724d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:47.985701  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:47.985738  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:47.985749  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:47.985757  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:47.985908  124311 wrap.go:47] GET /healthz: (338.447µs) 500
goroutine 16924 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a892d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a892d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f0af00, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0006915b8, 0xc005fe4180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725400)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0006915b8, 0xc00a725400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004ec0060, 0xc00a96ac20, 0x6387d00, 0xc0006915b8, 0xc00a725400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:48.073677  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:48.073711  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.073721  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:48.073728  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:48.073895  124311 wrap.go:47] GET /healthz: (323.264µs) 500
goroutine 16750 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082d7c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082d7c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0040c1d20, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a4260, 0xc0060b6300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a4260, 0xc00a20b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f06f60, 0xc00a96ac20, 0x6387d00, 0xc0083a4260, 0xc00a20b800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:48.084270  124311 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 00:24:48.084307  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.084318  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:48.084327  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:48.084484  124311 wrap.go:47] GET /healthz: (356.496µs) 500
goroutine 16926 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a892ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a892ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f0afa0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0006915c0, 0xc005fe4600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0006915c0, 0xc00a725800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004ec0120, 0xc00a96ac20, 0x6387d00, 0xc0006915c0, 0xc00a725800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:48.085437  124311 clientconn.go:551] parsed scheme: ""
I0315 00:24:48.085469  124311 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 00:24:48.085531  124311 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 00:24:48.085608  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 00:24:48.086080  124311 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 00:24:48.086159  124311 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
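[editor's note] The repeated "GET /healthz ... 500" entries above are the startup readiness poll: two clients (user agents Go-http-client/1.1 and scheduler.test) request /healthz roughly every 100ms, and the endpoint keeps returning 500 until every check — etcd plus the poststarthook checks — reports ok. Once the etcd client connection is pinned (the clientv3/balancer lines just above), the etcd check passes on the next poll. The sketch below is illustrative only, assuming a plain HTTP poll loop; it is not the kubernetes test-harness code, and the base URL/port is a hypothetical placeholder.

// Illustrative sketch (assumption, not the actual test code): poll /healthz
// until it returns 200 OK or a deadline expires, mirroring the ~100ms
// GET /healthz loop visible in this log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// All checks (etcd, poststarthooks, ...) reported ok.
				return nil
			}
		}
		// Matches the poll interval seen in the log above.
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver /healthz not ready within %v", timeout)
}

func main() {
	// Hypothetical address; the integration test serves on a random local port.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
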
I0315 00:24:48.174778  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.174810  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:48.174818  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:48.175011  124311 wrap.go:47] GET /healthz: (1.531073ms) 500
goroutine 16752 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0082d7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0082d7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00392e060, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a4288, 0xc0033f82c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20be00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a4288, 0xc00a20be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f07140, 0xc00a96ac20, 0x6387d00, 0xc0083a4288, 0xc00a20be00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57710]
I0315 00:24:48.185354  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.185389  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:48.185397  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:48.185612  124311 wrap.go:47] GET /healthz: (1.422183ms) 500
goroutine 16940 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f380e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f380e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003f3efc0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002574af0, 0xc0033f89a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0cce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002574af0, 0xc00a0ccd00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002574af0, 0xc00a0ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a75f920, 0xc00a96ac20, 0x6387d00, 0xc002574af0, 0xc00a0ccd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:48.274323  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.274361  124311 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 00:24:48.274369  124311 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 00:24:48.274374  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.51372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.274587  124311 wrap.go:47] GET /healthz: (983.262µs) 500
goroutine 16845 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007fcb960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007fcb960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00424cfe0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a6a48, 0xc002660160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632000)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a6a48, 0xc00a632000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a306ea0, 0xc00a96ac20, 0x6387d00, 0xc0083a6a48, 0xc00a632000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:48.274941  124311 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.069392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:48.275798  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.690701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.276010  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.045318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.278191  124311 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.073297ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.278535  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.193067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.278871  124311 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.419751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:48.279118  124311 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
I0315 00:24:48.280598  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.742145ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.280760  124311 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (2.090424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.280949  124311 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.659287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57710]
I0315 00:24:48.282186  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (851.775µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.282777  124311 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.431715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.282957  124311 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
I0315 00:24:48.282978  124311 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
I0315 00:24:48.283802  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.325544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.284750  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.284943  124311 wrap.go:47] GET /healthz: (979.421µs) 500
goroutine 17027 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0060fa620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0060fa620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d80cc0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a6b50, 0xc0035f32c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbeb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbea00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a6b50, 0xc00afbea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ad82ae0, 0xc00a96ac20, 0x6387d00, 0xc0083a6b50, 0xc00afbea00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.285142  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.016279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57704]
I0315 00:24:48.286350  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (682.01µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.287487  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (772.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.288634  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (794.469µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.291257  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.772737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.291446  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0315 00:24:48.292478  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (861.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.294351  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.294677  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0315 00:24:48.295722  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (882.882µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.297980  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.85046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.298383  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0315 00:24:48.300431  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.815668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.302613  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.59602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.302828  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0315 00:24:48.303934  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (899.124µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.305774  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.43326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.306093  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0315 00:24:48.307124  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (830.534µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.308787  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.273506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.308993  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0315 00:24:48.310136  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (821.135µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.312236  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.666149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.312445  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0315 00:24:48.313597  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (919.548µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.315959  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.938689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.316201  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0315 00:24:48.317623  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.223946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.320526  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.388895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.320819  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0315 00:24:48.322069  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.059552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.324409  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.804311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.324721  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0315 00:24:48.325907  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (927.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.327936  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.595558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.328305  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0315 00:24:48.329365  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (840.266µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.332009  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.174686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.332339  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0315 00:24:48.333720  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.047283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.335693  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.562435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.335916  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0315 00:24:48.336877  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (760.179µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.338934  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.654276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.339198  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0315 00:24:48.340547  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.14936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.342430  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.506163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.342631  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0315 00:24:48.343611  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (814.856µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.345707  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.520981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.345939  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0315 00:24:48.346931  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (809.069µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.348687  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.34285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.348893  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0315 00:24:48.349851  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (789.341µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.351730  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.352009  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0315 00:24:48.353065  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (853.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.355167  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.638461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.355458  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0315 00:24:48.356637  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (891.508µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.359376  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.176238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.359698  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0315 00:24:48.370354  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (9.849873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.375824  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.376031  124311 wrap.go:47] GET /healthz: (1.390771ms) 500
goroutine 17099 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0086b33b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0086b33b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0024a7160, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a70c8, 0xc001ebeb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a70c8, 0xc0086bb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0085c4f60, 0xc00a96ac20, 0x6387d00, 0xc0083a70c8, 0xc0086bb800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:48.378965  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.922141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.379374  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0315 00:24:48.383856  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (4.255241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.386463  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.037941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.386708  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0315 00:24:48.386793  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.386999  124311 wrap.go:47] GET /healthz: (2.866174ms) 500
goroutine 17110 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00841d730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00841d730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00257e660, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007eced20, 0xc002516780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffd00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007eced20, 0xc0083ffd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0086fe5a0, 0xc00a96ac20, 0x6387d00, 0xc007eced20, 0xc0083ffd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.388469  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.534708ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.391144  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.253216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.391388  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0315 00:24:48.392538  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (943.605µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.394583  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.394745  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0315 00:24:48.395760  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (823.412µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.398811  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.614005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.399032  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0315 00:24:48.400186  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (946.289µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.402470  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.816148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.402725  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0315 00:24:48.403680  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (791.511µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.405430  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.405693  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0315 00:24:48.406654  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (774.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.408399  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.347132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.408726  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0315 00:24:48.409657  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (763.214µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.411583  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.58421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.411838  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0315 00:24:48.412898  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (862.423µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.414958  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.415142  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0315 00:24:48.416814  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.329905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.418689  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.487337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.418940  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0315 00:24:48.420062  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (856.286µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.422040  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.422306  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0315 00:24:48.423350  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (828.386µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.425237  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.485707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.425463  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0315 00:24:48.426444  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (778.473µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.428295  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.428881  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0315 00:24:48.429885  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (805.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.431733  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.47565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.432001  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0315 00:24:48.433529  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (847.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.435618  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.685748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.436054  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0315 00:24:48.437533  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.233506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.439345  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.454158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.439538  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0315 00:24:48.440583  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (847.866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.442728  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.655684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.442927  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0315 00:24:48.443946  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (827.121µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.445675  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.303862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.445929  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0315 00:24:48.446935  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (776.054µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.448831  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.485769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.449042  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0315 00:24:48.450115  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (867.253µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.452012  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.478791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.452377  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0315 00:24:48.453439  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (875.302µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.455377  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.442274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.455617  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0315 00:24:48.457542  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.723234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.460098  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.989952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.460301  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0315 00:24:48.461241  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (771.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.463555  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.712369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.463829  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0315 00:24:48.464906  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (897.281µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.466762  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.466970  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0315 00:24:48.468011  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (837.033µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.469776  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.427041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.470037  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0315 00:24:48.471572  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.256649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.473937  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.980617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.474123  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0315 00:24:48.474194  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.474346  124311 wrap.go:47] GET /healthz: (871.289µs) 500
goroutine 17124 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0089e9260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0089e9260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d181e0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc00606f078, 0xc003d7c500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc00606f078, 0xc008212400)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc00606f078, 0xc008212400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0086af740, 0xc00a96ac20, 0x6387d00, 0xc00606f078, 0xc008212400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:48.475699  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.300477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.480251  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.119225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.480653  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0315 00:24:48.481730  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (913.058µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.483912  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.615889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.484169  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0315 00:24:48.484823  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.484975  124311 wrap.go:47] GET /healthz: (867.547µs) 500
goroutine 17283 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009f60700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009f60700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d5f1a0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecf320, 0xc0002dd900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9500)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecf320, 0xc0097f9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009915260, 0xc00a96ac20, 0x6387d00, 0xc007ecf320, 0xc0097f9500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.485278  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (952.698µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.487275  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.487564  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0315 00:24:48.488765  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (960.63µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.494703  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.747028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.495031  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0315 00:24:48.514628  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.518724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.535288  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.187359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.535633  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0315 00:24:48.554695  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.572183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.574930  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.575193  124311 wrap.go:47] GET /healthz: (1.780315ms) 500
goroutine 17305 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009e07a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009e07a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ec3400, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a5408, 0xc00237e3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1500)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a5408, 0xc00a0d1500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a0699e0, 0xc00a96ac20, 0x6387d00, 0xc0083a5408, 0xc00a0d1500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:48.575639  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.237663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.576171  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0315 00:24:48.585288  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.585467  124311 wrap.go:47] GET /healthz: (1.052282ms) 500
goroutine 17285 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009f617a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009f617a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ee8920, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecf508, 0xc003d7c8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33c900)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecf508, 0xc00a33c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a372360, 0xc00a96ac20, 0x6387d00, 0xc007ecf508, 0xc00a33c900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.594143  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.052931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.615547  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.383893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.615843  124311 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0315 00:24:48.634672  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.569135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.655283  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.060788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.655587  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0315 00:24:48.675858  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.676004  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.926754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.678375  124311 wrap.go:47] GET /healthz: (5.013788ms) 500
goroutine 17292 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a5260e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a5260e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f182e0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecf5f0, 0xc003d7cdc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecf5f0, 0xc00a33d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a372ea0, 0xc00a96ac20, 0x6387d00, 0xc007ecf5f0, 0xc00a33d800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:48.685157  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.685380  124311 wrap.go:47] GET /healthz: (1.258919ms) 500
goroutine 17135 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a3b6a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a3b6a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f16fe0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc00606f418, 0xc002517900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc00606f418, 0xc00a401300)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc00606f418, 0xc00a401300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a185ce0, 0xc00a96ac20, 0x6387d00, 0xc00606f418, 0xc00a401300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.699204  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.121058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.699522  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0315 00:24:48.714371  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.259494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.735216  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.085463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.735466  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0315 00:24:48.754746  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.70577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.774639  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.774902  124311 wrap.go:47] GET /healthz: (1.446169ms) 500
goroutine 17296 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a526620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a526620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f19240, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecf678, 0xc003d7d2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecf678, 0xc00a5f8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a373aa0, 0xc00a96ac20, 0x6387d00, 0xc007ecf678, 0xc00a5f8800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:48.775333  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.775589  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0315 00:24:48.785389  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.785662  124311 wrap.go:47] GET /healthz: (1.478713ms) 500
goroutine 17276 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a586930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a586930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f46c20, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002575c18, 0xc0035f3b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002575c18, 0xc00a1afe00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002575c18, 0xc00a1afe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a1d3080, 0xc00a96ac20, 0x6387d00, 0xc002575c18, 0xc00a1afe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.794567  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.463182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.815144  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.016421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.815391  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0315 00:24:48.834230  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.202876ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.855338  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.277147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.855650  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0315 00:24:48.874485  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.397403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:48.874559  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.874769  124311 wrap.go:47] GET /healthz: (1.327815ms) 500
goroutine 17281 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a587260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a587260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fd06e0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002575d48, 0xc002b0eb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb000)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002575d48, 0xc00a6bb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a1d3800, 0xc00a96ac20, 0x6387d00, 0xc002575d48, 0xc00a6bb000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:48.885452  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.885697  124311 wrap.go:47] GET /healthz: (1.436965ms) 500
goroutine 17322 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a3b7f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a3b7f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002faf080, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc00606f6c8, 0xc002b0f540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611d00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc00606f6c8, 0xc00a611d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a784000, 0xc00a96ac20, 0x6387d00, 0xc00606f6c8, 0xc00a611d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.895177  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.068816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.895477  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0315 00:24:48.914438  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.330463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.935675  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.599492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.935959  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0315 00:24:48.954468  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.402141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.974353  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.974598  124311 wrap.go:47] GET /healthz: (1.250479ms) 500
goroutine 17347 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a033a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a033a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002eb7d00, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002e11000, 0xc003d7d7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002e11000, 0xc00a3ae600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a2d3140, 0xc00a96ac20, 0x6387d00, 0xc002e11000, 0xc00a3ae600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:48.975111  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.978921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.975380  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0315 00:24:48.985170  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:48.985436  124311 wrap.go:47] GET /healthz: (1.260371ms) 500
goroutine 17363 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a5879d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a5879d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fd1ca0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002575e98, 0xc002c28280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002575e98, 0xc00a820500)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002575e98, 0xc00a820500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a884060, 0xc00a96ac20, 0x6387d00, 0xc002575e98, 0xc00a820500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:48.994536  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.449413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.015405  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.015764  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0315 00:24:49.034662  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.525716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.055732  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.62239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.056020  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0315 00:24:49.074562  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.074675  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.428554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.074773  124311 wrap.go:47] GET /healthz: (1.339846ms) 500
goroutine 17367 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a914070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a914070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003069640, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002575fa0, 0xc002c288c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002575fa0, 0xc00a821600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a884a20, 0xc00a96ac20, 0x6387d00, 0xc002575fa0, 0xc00a821600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:49.085376  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.085661  124311 wrap.go:47] GET /healthz: (1.468091ms) 500
goroutine 17369 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a9141c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a9141c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003069a40, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002575fe0, 0xc002c28c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821e00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002575fe0, 0xc00a821e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a884d80, 0xc00a96ac20, 0x6387d00, 0xc002575fe0, 0xc00a821e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.095692  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.557109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.095964  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0315 00:24:49.114743  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.571943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.135718  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.641873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.135955  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0315 00:24:49.154764  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.595794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.175377  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.29623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.176002  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.176070  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0315 00:24:49.176176  124311 wrap.go:47] GET /healthz: (2.568869ms) 500
goroutine 17214 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a8f8690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a8f8690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002dc3a40, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0005d70a8, 0xc00237ec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fd00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0005d70a8, 0xc009f1fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009efb1a0, 0xc00a96ac20, 0x6387d00, 0xc0005d70a8, 0xc009f1fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:49.185094  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.185301  124311 wrap.go:47] GET /healthz: (1.175546ms) 500
goroutine 17216 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a8f87e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a8f87e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002dc3ca0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0005d70d8, 0xc0028883c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76400)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0005d70d8, 0xc008a76400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009efb500, 0xc00a96ac20, 0x6387d00, 0xc0005d70d8, 0xc008a76400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.194938  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.35038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.215589  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.450084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.215873  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0315 00:24:49.234862  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.609516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.255256  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.255556  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0315 00:24:49.277415  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.277554  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.193205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.277608  124311 wrap.go:47] GET /healthz: (1.478058ms) 500
goroutine 17355 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a94abd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a94abd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0030ba5c0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002e11200, 0xc003d7db80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002e11200, 0xc00a3afe00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002e11200, 0xc00a3afe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008aae120, 0xc00a96ac20, 0x6387d00, 0xc002e11200, 0xc00a3afe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:49.285431  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.285642  124311 wrap.go:47] GET /healthz: (1.421518ms) 500
goroutine 17357 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a94ae00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a94ae00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0030bab60, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002e11238, 0xc002c29400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002e11238, 0xc005576500)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002e11238, 0xc005576500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008aae8a0, 0xc00a96ac20, 0x6387d00, 0xc002e11238, 0xc005576500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.295270  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.206859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.295675  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0315 00:24:49.314678  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.573616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.335974  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.834594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.336255  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0315 00:24:49.354707  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.570376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.374517  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.374694  124311 wrap.go:47] GET /healthz: (1.325631ms) 500
goroutine 17376 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a915490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a915490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031224e0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3ef90, 0xc00237f400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937c00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3ef90, 0xc00a937c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e925a0, 0xc00a96ac20, 0x6387d00, 0xc000a3ef90, 0xc00a937c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:49.374932  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.823639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.375175  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0315 00:24:49.386339  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.386759  124311 wrap.go:47] GET /healthz: (2.124064ms) 500
goroutine 17387 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a9fac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a9fac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003114140, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a5a38, 0xc002c29a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975c00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a5a38, 0xc00a975c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008bb8b40, 0xc00a96ac20, 0x6387d00, 0xc0083a5a38, 0xc00a975c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.394304  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.23828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.415344  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.254684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.415718  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0315 00:24:49.434621  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.480919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.455864  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.821533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.456103  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0315 00:24:49.475087  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.023497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.476384  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.476627  124311 wrap.go:47] GET /healthz: (1.989019ms) 500
goroutine 17436 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004a764d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004a764d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003185180, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3f290, 0xc0070fe3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3f290, 0xc005b66000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3f290, 0xc004c27f00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3f290, 0xc004c27f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007e93a40, 0xc00a96ac20, 0x6387d00, 0xc000a3f290, 0xc004c27f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:49.485239  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.485461  124311 wrap.go:47] GET /healthz: (1.399276ms) 500
goroutine 17389 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a9fb340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a9fb340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003115300, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a5af8, 0xc0055ae500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2c00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a5af8, 0xc004bc2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008bb9080, 0xc00a96ac20, 0x6387d00, 0xc0083a5af8, 0xc004bc2c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.495483  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.40414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.495725  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0315 00:24:49.514430  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.347608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.535596  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.51356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.535847  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0315 00:24:49.554425  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.337185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.574633  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.574829  124311 wrap.go:47] GET /healthz: (1.387656ms) 500
goroutine 17440 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004a76cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004a76cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031e6360, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3f350, 0xc000077540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66d00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3f350, 0xc005b66d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005b4cc60, 0xc00a96ac20, 0x6387d00, 0xc000a3f350, 0xc005b66d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:49.575198  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.575442  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0315 00:24:49.585009  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.585255  124311 wrap.go:47] GET /healthz: (1.144922ms) 500
goroutine 17474 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004a77030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004a77030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031e6840, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3f3a0, 0xc0055aec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67200)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3f3a0, 0xc005b67200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005b4d020, 0xc00a96ac20, 0x6387d00, 0xc000a3f3a0, 0xc005b67200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.594516  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.331288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.615326  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.185468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.615582  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0315 00:24:49.634585  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.423117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.655145  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.973059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.655403  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0315 00:24:49.674398  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.674563  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.465649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.674614  124311 wrap.go:47] GET /healthz: (1.195399ms) 500
goroutine 17507 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004dde150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004dde150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031e0be0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a5d50, 0xc0055af180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cb00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a5d50, 0xc00598cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005150480, 0xc00a96ac20, 0x6387d00, 0xc0083a5d50, 0xc00598cb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:49.685302  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.685610  124311 wrap.go:47] GET /healthz: (1.432729ms) 500
goroutine 17412 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008139030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008139030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00321ada0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecfbb8, 0xc0030c6140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4900)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecfbb8, 0xc006ef4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00802d440, 0xc00a96ac20, 0x6387d00, 0xc007ecfbb8, 0xc006ef4900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.695706  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.624789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.696084  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0315 00:24:49.714648  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.506791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.735380  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.31552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.735656  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0315 00:24:49.754693  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.547299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.774580  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.774897  124311 wrap.go:47] GET /healthz: (1.48373ms) 500
goroutine 17483 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004a77810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004a77810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00323ed40, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3f550, 0xc0030c6500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3f550, 0xc00203d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0000426c0, 0xc00a96ac20, 0x6387d00, 0xc000a3f550, 0xc00203d600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:49.775292  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.008491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.775589  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0315 00:24:49.785216  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.785425  124311 wrap.go:47] GET /healthz: (1.272756ms) 500
goroutine 17464 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a791f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a791f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031d8c60, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc00606fef8, 0xc002092280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc00606fef8, 0xc004a13600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a785c80, 0xc00a96ac20, 0x6387d00, 0xc00606fef8, 0xc004a13600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.794604  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.527179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.815388  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.17893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.815727  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0315 00:24:49.834787  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.655042ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.855324  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.225946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.855627  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0315 00:24:49.874376  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.874484  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.329449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:49.874602  124311 wrap.go:47] GET /healthz: (1.147541ms) 500
goroutine 17497 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0053cc230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0053cc230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032caa40, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0005d77a0, 0xc002888c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0005d77a0, 0xc005a2f800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005174120, 0xc00a96ac20, 0x6387d00, 0xc0005d77a0, 0xc005a2f800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:49.885248  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.885481  124311 wrap.go:47] GET /healthz: (1.308743ms) 500
goroutine 17524 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0054e8a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0054e8a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00330a7c0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3f760, 0xc002092640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7300)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3f760, 0xc0055d7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005176660, 0xc00a96ac20, 0x6387d00, 0xc000a3f760, 0xc0055d7300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.895338  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.26797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.895633  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0315 00:24:49.914707  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.564518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.935199  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.178181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.935538  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0315 00:24:49.954658  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.532493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.974470  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.974799  124311 wrap.go:47] GET /healthz: (1.37291ms) 500
goroutine 17424 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0058824d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0058824d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00327da00, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecfe18, 0xc0055af680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880b00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecfe18, 0xc005880b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004dc83c0, 0xc00a96ac20, 0x6387d00, 0xc007ecfe18, 0xc005880b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:49.975570  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.439949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.975823  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0315 00:24:49.985039  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:49.985298  124311 wrap.go:47] GET /healthz: (1.125174ms) 500
goroutine 17499 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0053cc5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0053cc5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00332a520, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0005d7800, 0xc0030c68c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2fe00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0005d7800, 0xc005a2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0051744e0, 0xc00a96ac20, 0x6387d00, 0xc0005d7800, 0xc005a2fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:49.994511  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.393203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.023647  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (10.231744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.024156  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0315 00:24:50.035141  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.250806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.055627  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.581708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.055882  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0315 00:24:50.074664  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.074705  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.595454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.074853  124311 wrap.go:47] GET /healthz: (1.452674ms) 500
goroutine 17538 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005882930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005882930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0033d2680, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc007ecfee0, 0xc0030c6c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881800)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc007ecfee0, 0xc005881800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004dc8a20, 0xc00a96ac20, 0x6387d00, 0xc007ecfee0, 0xc005881800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:50.085210  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.085418  124311 wrap.go:47] GET /healthz: (1.255871ms) 500
goroutine 17472 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0056d7500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0056d7500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003378a60, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0001e1f98, 0xc0070fea00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b100)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0001e1f98, 0xc004e3b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004e62f00, 0xc00a96ac20, 0x6387d00, 0xc0001e1f98, 0xc004e3b100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.095213  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.095545  124311 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0315 00:24:50.114649  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.534711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.116829  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.656161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.135484  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.339114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.135766  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0315 00:24:50.154546  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.462736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.156515  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.427432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.174611  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.174834  124311 wrap.go:47] GET /healthz: (1.498202ms) 500
goroutine 17452 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003c80d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003c80d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003437ae0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc002e11630, 0xc0070fef00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc002e11630, 0xc003c7b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003c78ae0, 0xc00a96ac20, 0x6387d00, 0xc002e11630, 0xc003c7b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:50.175138  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.945532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.175404  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0315 00:24:50.185064  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.185272  124311 wrap.go:47] GET /healthz: (1.189587ms) 500
goroutine 17557 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0056d7ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0056d7ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003471880, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000b42348, 0xc002889400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000b42348, 0xc000c34600)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000b42348, 0xc000c34600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004e638c0, 0xc00a96ac20, 0x6387d00, 0xc000b42348, 0xc000c34600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.194184  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.129593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.195976  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.296662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.215143  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.073765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.215429  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0315 00:24:50.234589  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.462759ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.236525  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.421426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.255311  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.18003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.255613  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0315 00:24:50.274381  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.274564  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.470177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.274620  124311 wrap.go:47] GET /healthz: (1.251137ms) 500
goroutine 17528 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0054e90a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0054e90a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003361040, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3f888, 0xc0055afb80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c200)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3f888, 0xc006f8c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005177140, 0xc00a96ac20, 0x6387d00, 0xc000a3f888, 0xc006f8c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
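Each of the 500 responses above (and the similar blocks that follow) comes from the aggregated /healthz endpoint: while the rbac/bootstrap-roles post-start hook is still creating the default roles and rolebindings, that one check fails, so the whole endpoint returns 500 and logs the per-check "[+]/[-]" report shown in the "logging error output" line. A minimal sketch of such an aggregated health handler, assuming illustrative check names and not the apiserver's actual healthz package:

```go
// Minimal sketch, not the apiserver's healthz package: run every named check,
// emit a [+]/[-] line per check, and return 500 if any check fails -- the same
// shape as the "logging error output" blocks in this log.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

type namedCheck struct {
	name string
	run  func() error
}

// healthz aggregates the checks into a single HTTP handler.
func healthz(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var report strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				fmt.Fprintf(&report, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&report, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			report.WriteString("healthz check failed\n")
			http.Error(w, report.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, "ok")
	}
}

func main() {
	bootstrapDone := false // would flip to true once the bootstrap hook finishes
	checks := []namedCheck{
		{name: "ping", run: func() error { return nil }},
		{name: "poststarthook/rbac/bootstrap-roles", run: func() error {
			if !bootstrapDone {
				return fmt.Errorf("not finished")
			}
			return nil
		}},
	}
	http.Handle("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```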
I0315 00:24:50.276345  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.355368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.285514  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.285706  124311 wrap.go:47] GET /healthz: (1.385674ms) 500
goroutine 17579 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0029a3e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0029a3e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003651060, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a9ef10, 0xc009e4c000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50400)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a9ef10, 0xc003e50400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001408c60, 0xc00a96ac20, 0x6387d00, 0xc000a9ef10, 0xc003e50400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.295202  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.165034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.295538  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0315 00:24:50.314517  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.453416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.316480  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.400472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.335240  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.175137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.335596  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0315 00:24:50.354745  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.612048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.356698  124311 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.490994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.374556  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.374725  124311 wrap.go:47] GET /healthz: (1.316028ms) 500
goroutine 17607 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc006fab960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc006fab960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003864700, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000b427a0, 0xc009e4c3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000b427a0, 0xc002baee00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000b427a0, 0xc002baee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0010d3920, 0xc00a96ac20, 0x6387d00, 0xc000b427a0, 0xc002baee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:50.375811  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.721058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.376044  124311 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0315 00:24:50.385047  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.385239  124311 wrap.go:47] GET /healthz: (1.117723ms) 500
goroutine 17624 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0054e9f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0054e9f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038dbc40, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3fbf0, 0xc0030c7040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc005aa0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3fbf0, 0xc006f8df00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3fbf0, 0xc006f8df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00485c300, 0xc00a96ac20, 0x6387d00, 0xc000a3fbf0, 0xc006f8df00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.394341  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.271586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.396183  124311 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.35716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.415291  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.258977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.415556  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0315 00:24:50.434530  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.429005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.436300  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.321504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.455513  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.354361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.455772  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0315 00:24:50.474326  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.474537  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.421025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.474596  124311 wrap.go:47] GET /healthz: (1.175761ms) 500
goroutine 17682 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc006ab2ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc006ab2ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00395ecc0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a3fe30, 0xc0070ff2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1900)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a3fe30, 0xc005aa1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00485d020, 0xc00a96ac20, 0x6387d00, 0xc000a3fe30, 0xc005aa1900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57722]
I0315 00:24:50.476315  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.276547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.485203  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.485387  124311 wrap.go:47] GET /healthz: (1.257152ms) 500
goroutine 17600 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003a20bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003a20bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006e472c0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0005d7c08, 0xc002092c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002798000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0005d7c08, 0xc002433f00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0005d7c08, 0xc002433f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002282540, 0xc00a96ac20, 0x6387d00, 0xc0005d7c08, 0xc002433f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.495265  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.247852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.495632  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0315 00:24:50.514543  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.443843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.516312  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.267151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.535727  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.617431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.536016  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0315 00:24:50.554627  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.573979ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.556718  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.532435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.574401  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.574606  124311 wrap.go:47] GET /healthz: (1.216362ms) 500
goroutine 17584 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f12d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f12d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038f4ce0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc000a9f0e0, 0xc002093040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc006db4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc000a9f0e0, 0xc003e51f00)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc000a9f0e0, 0xc003e51f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001409860, 0xc00a96ac20, 0x6387d00, 0xc000a9f0e0, 0xc003e51f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:57724]
I0315 00:24:50.575001  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.872721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.575227  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0315 00:24:50.585023  124311 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 00:24:50.585203  124311 wrap.go:47] GET /healthz: (1.140605ms) 500
goroutine 17241 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009bb19d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009bb19d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e553a0, 0x1f4)
net/http.Error(0x7fa6e6e74da8, 0xc0083a7908, 0xc001ebf040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
net/http.HandlerFunc.ServeHTTP(0xc00577a7e0, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00562c980, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00a309960, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x426c2a0, 0xe, 0xc00a295e60, 0xc00a309960, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
net/http.HandlerFunc.ServeHTTP(0xc008528840, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
net/http.HandlerFunc.ServeHTTP(0xc00a966f00, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
net/http.HandlerFunc.ServeHTTP(0xc008528880, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1dea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1de900)
net/http.HandlerFunc.ServeHTTP(0xc00a968550, 0x7fa6e6e74da8, 0xc0083a7908, 0xc00a1de900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009d8de00, 0xc00a96ac20, 0x6387d00, 0xc0083a7908, 0xc00a1de900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.596486  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.670731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.603034  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (6.097806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.614910  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.854802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.615198  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0315 00:24:50.637994  124311 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (4.947365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.641010  124311 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.647491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.654879  124311 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.826071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.655197  124311 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0315 00:24:50.674734  124311 wrap.go:47] GET /healthz: (1.267917ms) 200 [Go-http-client/1.1 127.0.0.1:57722]
W0315 00:24:50.675554  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675606  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675646  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675665  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675680  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675697  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675711  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675734  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675751  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 00:24:50.675766  124311 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0315 00:24:50.675830  124311 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0315 00:24:50.675847  124311 factory.go:412] Creating scheduler with fit predicates 'map[MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} CheckNodePIDPressure:{} NoVolumeZoneConflict:{} MatchInterPodAffinity:{} CheckNodeCondition:{} NoDiskConflict:{} GeneralPredicates:{} CheckNodeMemoryPressure:{} CheckNodeDiskPressure:{} PodToleratesNodeTaints:{} CheckVolumeBinding:{}]' and priority functions 'map[InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{}]'
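The fit predicates listed above include CheckNodePIDPressure, which is what this test exercises: a node reporting PID pressure should not be considered feasible for ordinary pods. A rough sketch of what that condition check amounts to, using only core/v1 types (illustrative, not the scheduler's actual CheckNodePIDPressure code):

```go
// Sketch of a PID-pressure fit check: a node whose status carries
// PIDPressure=True is treated as not fitting. Not the scheduler's real code.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeUnderPIDPressure reports whether the node's conditions include
// PIDPressure with status True.
func nodeUnderPIDPressure(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	node := &v1.Node{}
	node.Status.Conditions = []v1.NodeCondition{
		{Type: v1.NodePIDPressure, Status: v1.ConditionTrue},
	}
	fmt.Println("node fits:", !nodeUnderPIDPressure(node)) // node fits: false
}
```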
I0315 00:24:50.676056  124311 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0315 00:24:50.676324  124311 reflector.go:123] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0315 00:24:50.676350  124311 reflector.go:161] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0315 00:24:50.677361  124311 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (694.163µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:24:50.678474  124311 get.go:251] Starting watch for /api/v1/pods, rv=19281 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m26s
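The pod list/watch above filters out terminal pods with a field selector (status.phase!=Failed,status.phase!=Succeeded). A small sketch of building an equivalent selector with the apimachinery fields package (an illustration, not the test's own code):

```go
// Builds the same kind of field selector that appears in the fieldSelector
// query parameter of the pod LIST/WATCH above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	sel, err := fields.ParseSelector("status.phase!=Failed,status.phase!=Succeeded")
	if err != nil {
		panic(err)
	}
	fmt.Println(sel.String())
}
```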
I0315 00:24:50.691286  124311 wrap.go:47] GET /healthz: (1.398135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.693111  124311 wrap.go:47] GET /api/v1/namespaces/default: (1.357438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.695773  124311 wrap.go:47] POST /api/v1/namespaces: (2.021763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.697338  124311 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.116645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.701098  124311 wrap.go:47] POST /api/v1/namespaces/default/services: (3.254786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.702412  124311 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (942.141µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.704256  124311 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.461529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.776278  124311 shared_informer.go:123] caches populated
I0315 00:24:50.776383  124311 controller_utils.go:1034] Caches are synced for scheduler controller
I0315 00:24:50.776745  124311 reflector.go:123] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776774  124311 reflector.go:161] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776809  124311 reflector.go:123] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776839  124311 reflector.go:161] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776955  124311 reflector.go:123] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776970  124311 reflector.go:161] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.777018  124311 reflector.go:123] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.777038  124311 reflector.go:161] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776760  124311 reflector.go:123] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.777099  124311 reflector.go:161] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.777150  124311 reflector.go:123] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.777164  124311 reflector.go:161] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.778030  124311 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (736.114µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:24:50.778054  124311 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (398.176µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57928]
I0315 00:24:50.778084  124311 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (334.537µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57930]
I0315 00:24:50.778084  124311 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (529.497µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57926]
I0315 00:24:50.778467  124311 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (872.518µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57932]
I0315 00:24:50.778671  124311 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=19287 labels= fields= timeout=8m8s
I0315 00:24:50.778762  124311 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=19282 labels= fields= timeout=9m31s
I0315 00:24:50.779092  124311 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=19287 labels= fields= timeout=8m9s
I0315 00:24:50.779141  124311 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=19279 labels= fields= timeout=5m4s
I0315 00:24:50.779097  124311 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=19287 labels= fields= timeout=6m6s
I0315 00:24:50.779363  124311 reflector.go:123] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.776843  124311 reflector.go:123] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.779407  124311 reflector.go:161] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.779437  124311 reflector.go:123] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.779455  124311 reflector.go:161] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.779384  124311 reflector.go:161] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0315 00:24:50.780147  124311 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (370.1µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57936]
I0315 00:24:50.780319  124311 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (413.207µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57938]
I0315 00:24:50.780389  124311 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (479.101µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57940]
I0315 00:24:50.780753  124311 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (391.19µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57936]
I0315 00:24:50.780852  124311 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=19286 labels= fields= timeout=8m13s
I0315 00:24:50.781007  124311 get.go:251] Starting watch for /api/v1/services, rv=19515 labels= fields= timeout=5m25s
I0315 00:24:50.781083  124311 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=19279 labels= fields= timeout=7m35s
I0315 00:24:50.781277  124311 get.go:251] Starting watch for /api/v1/nodes, rv=19281 labels= fields= timeout=6m12s
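The reflector lines above ("Starting reflector ... (1s) from k8s.io/client-go/informers/factory.go:133") are shared informers started with a 1-second resync period, which is also why "forcing resync" repeats every second later in the log. A minimal sketch of starting such a factory and waiting for its cache to sync, assuming a reachable kubeconfig (the integration test instead talks to its in-process API server):

```go
// Minimal sketch: shared informer factory with a 1s resync period, matching
// the "(1s)" in the reflector log lines, then wait for the node cache to sync.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; purely a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("node cache synced")
}
```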
I0315 00:24:50.878723  124311 shared_informer.go:123] caches populated
I0315 00:24:50.979044  124311 shared_informer.go:123] caches populated
I0315 00:24:51.079275  124311 shared_informer.go:123] caches populated
I0315 00:24:51.179573  124311 shared_informer.go:123] caches populated
I0315 00:24:51.279845  124311 shared_informer.go:123] caches populated
I0315 00:24:51.380037  124311 shared_informer.go:123] caches populated
I0315 00:24:51.482603  124311 shared_informer.go:123] caches populated
I0315 00:24:51.582804  124311 shared_informer.go:123] caches populated
I0315 00:24:51.683037  124311 shared_informer.go:123] caches populated
I0315 00:24:51.778639  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:51.778830  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:51.780919  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:51.781048  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:51.781212  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:51.783258  124311 shared_informer.go:123] caches populated
I0315 00:24:51.787032  124311 wrap.go:47] POST /api/v1/nodes: (3.303657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:51.789827  124311 wrap.go:47] PUT /api/v1/nodes/testnode/status: (2.321702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:51.792650  124311 wrap.go:47] POST /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods: (2.104078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:51.792915  124311 scheduling_queue.go:908] About to try and schedule pod node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pidpressure-fake-name
I0315 00:24:51.792939  124311 scheduler.go:453] Attempting to schedule pod: node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pidpressure-fake-name
I0315 00:24:51.793062  124311 scheduler_binder.go:269] AssumePodVolumes for pod "node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pidpressure-fake-name", node "testnode"
I0315 00:24:51.793084  124311 scheduler_binder.go:279] AssumePodVolumes for pod "node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0315 00:24:51.793145  124311 factory.go:733] Attempting to bind pidpressure-fake-name to testnode
I0315 00:24:51.795374  124311 wrap.go:47] POST /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name/binding: (1.93384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:51.795611  124311 scheduler.go:572] pod node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pidpressure-fake-name is bound successfully on node testnode, 1 nodes evaluated, 1 nodes were found feasible
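The POST to .../pods/pidpressure-fake-name/binding two lines up carries a core/v1 Binding object naming the chosen node. The sketch below builds the same kind of object and prints it as JSON; it does not contact an API server, and the namespace string is a placeholder for the generated test namespace seen in the log:

```go
// Illustrative construction of the Binding payload behind a pod binding POST.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pidpressure-fake-name",
			Namespace: "node-pid-pressure-example", // placeholder; the test uses a generated namespace
		},
		Target: v1.ObjectReference{
			Kind: "Node",
			Name: "testnode",
		},
	}
	out, err := json.MarshalIndent(binding, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```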
I0315 00:24:51.797707  124311 wrap.go:47] POST /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/events: (1.780463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
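From this point on the log is dominated by the test harness polling the pod with a GET roughly every 100ms. A minimal sketch of a poll loop with that cadence, assuming a stand-in condition (this is not the actual helper in test/integration/scheduler/util.go):

```go
// 100ms poll loop matching the cadence of the GET requests below; the closure
// is a stand-in for "the pod reached the state the test is waiting for".
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	err := wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		attempts++
		return attempts >= 5, nil // stand-in condition
	})
	fmt.Printf("stopped after %d attempts, err=%v\n", attempts, err)
}
```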
I0315 00:24:51.895032  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.660151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:51.995410  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.9462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.104239  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (10.460982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.195297  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.900535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.295000  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.535981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.395639  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.103096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.495610  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.150139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.595612  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.166762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.695616  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.180572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.778842  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:52.778993  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:52.781734  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:52.781754  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:52.781738  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:52.795401  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.918344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.895604  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.191821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:52.995540  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.046743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.095218  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.909553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.195472  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.091728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.295369  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.957715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.395321  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.935099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.495599  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.177875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.595444  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.060823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.695048  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.672245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.779000  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:53.779140  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:53.782006  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:53.782035  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:53.782035  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:53.795302  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.909405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.895376  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.962052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:53.995376  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.928494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.095527  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.024506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.195544  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.053393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.295524  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.064466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.395517  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.908892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.495254  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.857043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.595565  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.009197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.695339  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.914219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.779253  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:54.779343  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:54.782228  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:54.782242  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:54.782228  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:54.795529  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.049894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.895437  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.098494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:54.995385  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.93517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.095747  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.319645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.195378  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.942405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.295371  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.979542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.395309  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.906762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.495402  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.882465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.595426  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.96485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.695392  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.013549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.779453  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:55.779537  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:55.782452  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:55.782475  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:55.782450  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:55.795397  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.964265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.895316  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.950023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:55.995393  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.948398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.095327  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.960127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.195243  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.879426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.295193  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.821732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.395372  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.974984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.495716  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.270377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.595349  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.924678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.695352  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.988133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.779628  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:56.779636  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:56.782656  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:56.782817  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:56.782856  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:56.795336  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.937639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.894894  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.523371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:56.995229  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.830874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.095233  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.827794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.195165  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.847146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.295385  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.894144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.395203  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.855475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.495120  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.727742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.595012  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.600729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.695559  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.209893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.779836  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:57.779844  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:57.782892  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:57.782971  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:57.783080  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:57.795325  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.959361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.895069  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.687878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:57.995042  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.667213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.094975  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.61331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.195014  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.658414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.295238  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.823191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.395725  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.658633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.495022  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.594713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.595093  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.755898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.695138  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.730635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.780039  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:58.780040  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:58.783102  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:58.783102  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:58.783246  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:58.795347  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.935975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.895056  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.691996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:58.995161  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.784479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.095054  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.68793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.195247  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.828814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.295109  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.705753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.395278  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.810797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.495029  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.602893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.595224  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.694813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.695098  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.725145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.780419  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:59.780448  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:59.783365  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:59.783363  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:59.783365  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:24:59.794900  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.512819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.895096  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.738671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:24:59.995112  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.706413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.095312  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.927205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.195291  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.921663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.295194  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.784531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.395078  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.655826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.495085  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.720194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.595212  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.866149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.694246  124311 wrap.go:47] GET /api/v1/namespaces/default: (2.203288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.695608  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.717335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58698]
I0315 00:25:00.697532  124311 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.800752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.699233  124311 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.348658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.781253  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:00.781309  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:00.783698  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:00.783732  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:00.783808  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:00.795044  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.686122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.895183  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.742517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:00.996101  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.094822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.095487  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.042036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.195316  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.028926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.295454  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.986557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.395340  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.963543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.495556  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.155311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.596932  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.916366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.695598  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.225389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.781445  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:01.781513  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:01.783893  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:01.783900  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:01.783901  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:01.795529  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.108938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.897022  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.029519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:01.995561  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.114363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.095465  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.076767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.195312  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.941158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.295294  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.886091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.395357  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.827555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.495338  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.921636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.595328  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.977187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.695313  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.941744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.781598  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:02.781597  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:02.784125  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:02.784153  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:02.784165  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:02.795333  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.961712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.895041  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.649102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:02.995575  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.194773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.095417  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.854279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.195227  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.862763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.295334  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.905433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.395697  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.377956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.495078  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.718172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.597773  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.865766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.695241  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.863187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.781732  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:03.781785  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:03.784318  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:03.784331  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:03.784318  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:03.798373  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (5.019425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.895265  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.872651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:03.995847  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.394953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.095141  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.827106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.195256  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.616167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.295550  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.106816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.395472  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.111894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.495404  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.028013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.595421  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.981775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.695996  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.606682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.781897  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:04.781944  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:04.784544  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:04.784599  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:04.784676  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:04.795161  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.847234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.895075  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.719105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:04.995017  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.62979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.095087  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.765537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.195338  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.002551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.295044  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.727542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.395184  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.866047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.495424  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.024005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.595340  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.945872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.695257  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.773155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.782073  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:05.782118  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:05.784730  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:05.784773  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:05.784825  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:05.795187  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.742024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.895539  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.17852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:05.995698  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.191457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.095314  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.86412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.195381  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.83997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.295555  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.972236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.395523  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.06145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.495580  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.971061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.595473  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.875372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.695595  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.169464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.782297  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:06.782314  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:06.784964  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:06.784985  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:06.785089  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:06.795473  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.925129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.895349  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.90923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:06.995364  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.896646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.095618  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.142381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.195198  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.798947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.295658  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.168804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.395289  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.820215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.495875  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.056968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.595706  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.224889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.695487  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.0342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.782527  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:07.782527  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:07.785300  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:07.785322  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:07.785372  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:07.795523  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.005136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.896785  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.252684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:07.995829  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.241125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.095552  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.075083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.195687  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.235284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.295488  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.981212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.395979  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.435296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.495474  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.978026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.595722  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.123403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.695794  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.923307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.782778  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:08.782778  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:08.785634  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:08.785660  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:08.785635  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:08.795934  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.126299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.895697  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.218837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:08.995676  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.094292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.095582  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.10921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.195829  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.376743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.295597  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.034289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.395813  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.313534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.496278  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.619369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.595671  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.047324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.695540  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.143245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.783039  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:09.783091  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:09.785853  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:09.785853  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:09.785871  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:09.795827  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.250395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.895726  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.100086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:09.995678  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.151851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.095537  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.070804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.195659  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.112158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.295765  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.297672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.395561  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.118165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.495615  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.080728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.595615  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.06823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.694064  124311 wrap.go:47] GET /api/v1/namespaces/default: (1.76784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.694941  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.590772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58698]
I0315 00:25:10.695897  124311 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.213012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.697612  124311 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.234269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.783279  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:10.783289  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:10.786112  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:10.786234  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:10.786274  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:10.796239  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.706997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.895572  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.014393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:10.995633  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.108413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.095484  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.012519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.195588  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.139898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.295183  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.674433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.396589  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.243771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.495187  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.720873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.595401  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.010974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.695213  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.803009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.783546  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:11.783595  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:11.786368  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:11.786536  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:11.786668  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:11.798202  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (3.554868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.895830  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.942784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:11.997513  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (3.161566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.095449  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.945514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.195228  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.795103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.295205  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.786865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.396209  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.478939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.496043  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.319686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.595471  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.893707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.695485  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.01212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.783766  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:12.783815  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:12.786580  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:12.786754  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:12.786890  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:12.795338  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.913291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.895791  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.406897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:12.995707  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.919412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.095187  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.833432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.195102  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.780669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.296695  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.905599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.395223  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.778465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.495447  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.079587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.595250  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.862602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.695049  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.634117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.784033  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:13.784032  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:13.786853  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:13.786982  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:13.787073  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:13.795263  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.896469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.895675  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.175799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:13.995806  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.321337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.095578  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.024868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.195599  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.021368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.298837  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.909372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.395297  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.970165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.495996  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.402673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.596045  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.188217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.695482  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.016016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.784311  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:14.784327  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:14.787122  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:14.787185  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:14.787185  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:14.795886  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.268542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.895571  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.086969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:14.995780  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.166149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.095651  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.070802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.195458  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.001993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.295680  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.238708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.395692  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.119036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.495438  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.023161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.595399  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.021517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.695512  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.064773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.784528  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:15.784528  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:15.787406  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:15.787445  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:15.787419  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:15.795431  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.062082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.895242  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.857085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:15.995547  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.063824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.095808  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.039973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.195470  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.072114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.295385  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.945068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.395472  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.999995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.495541  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.086381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.595220  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.847419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.696523  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.063212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.784745  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:16.784745  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:16.787625  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:16.787655  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:16.787629  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:16.796841  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.946278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.895354  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.969966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:16.995476  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.054134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.095627  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.192933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.195260  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.94513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.295342  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.953626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.395437  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.047479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.495470  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.031702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.595459  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.007074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.695392  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.073289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.784955  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:17.784979  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:17.787740  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:17.787826  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:17.787851  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:17.795257  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.87711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.895290  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.942754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:17.995242  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.866763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.095328  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.027637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.195425  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.092274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.295276  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.937649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.395250  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.899023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.496197  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.241642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.595588  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.031515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.695277  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.857779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.785176  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:18.785239  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:18.787948  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:18.787985  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:18.788195  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:18.795407  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.013792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:18.901306  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (4.167626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.004203  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (10.803994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.095368  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.963732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.195300  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.970271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.295633  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.138734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.395407  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.04941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.495372  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.975338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.595628  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.209609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.695590  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.048894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.785372  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:19.785374  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:19.788102  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:19.788102  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:19.788345  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:19.795771  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.387802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.895095  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.806778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:19.995548  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.222765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.095139  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.832065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.195298  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.891377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.295600  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.143171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.395391  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.003541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.495364  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.888681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.595321  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.965839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.694217  124311 wrap.go:47] GET /api/v1/namespaces/default: (1.759826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.694879  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.62583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58698]
I0315 00:25:20.696713  124311 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.807933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.698927  124311 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.672759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.785697  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:20.785753  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:20.788214  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:20.788292  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:20.788528  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:20.795528  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.162916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.895450  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.066312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:20.996618  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.93274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.095702  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.043157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.195362  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.015482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.295597  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.206358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.396275  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (2.18053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.495073  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.782038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.596876  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.985314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.695281  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.925671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.785902  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:21.785902  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:21.788315  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:21.788439  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:21.788680  124311 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0315 00:25:21.795214  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.860073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.797238  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.548408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.802293  124311 wrap.go:47] DELETE /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (4.622172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.805564  124311 wrap.go:47] GET /api/v1/namespaces/node-pid-pressurebf2bb196-46b8-11e9-bbcf-0242ac110002/pods/pidpressure-fake-name: (1.213328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.806300  124311 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=19279&timeout=5m4s&timeoutSeconds=304&watch=true: (31.027365561s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57932]
E0315 00:25:21.806316  124311 scheduling_queue.go:911] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0315 00:25:21.806372  124311 wrap.go:47] GET /api/v1/nodes?resourceVersion=19281&timeout=6m12s&timeoutSeconds=372&watch=true: (31.025319368s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57936]
I0315 00:25:21.806307  124311 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=19287&timeout=8m9s&timeoutSeconds=489&watch=true: (31.02741142s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57926]
I0315 00:25:21.806545  124311 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=19287&timeout=6m6s&timeoutSeconds=366&watch=true: (31.027617726s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57934]
I0315 00:25:21.806553  124311 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=19286&timeout=8m13s&timeoutSeconds=493&watch=true: (31.025876652s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57938]
I0315 00:25:21.806611  124311 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=19281&timeoutSeconds=566&watch=true: (31.128531208s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57722]
I0315 00:25:21.806674  124311 wrap.go:47] GET /api/v1/services?resourceVersion=19515&timeout=5m25s&timeoutSeconds=325&watch=true: (31.025889024s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57940]
I0315 00:25:21.806753  124311 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=19279&timeout=7m35s&timeoutSeconds=455&watch=true: (31.025938677s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57944]
I0315 00:25:21.806801  124311 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=19287&timeout=8m8s&timeoutSeconds=488&watch=true: (31.028382703s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57724]
I0315 00:25:21.806864  124311 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=19282&timeout=9m31s&timeoutSeconds=571&watch=true: (31.028443246s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57928]
I0315 00:25:21.810297  124311 wrap.go:47] DELETE /api/v1/nodes: (3.650251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.810513  124311 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0315 00:25:21.812149  124311 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.401887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
I0315 00:25:21.814364  124311 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.804893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57942]
predicates_test.go:918: Test Failed: error, timed out waiting for the condition, while waiting for scheduled
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190315-001939.xml
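Note on the failure above: the long run of GET /api/v1/namespaces/node-pid-pressurebf2bb196-.../pods/pidpressure-fake-name requests is the test polling the pod roughly every 100ms until it is scheduled, and "timed out waiting for the condition, while waiting for scheduled" means that condition never became true before the poll timeout. Below is a minimal sketch (Go, client-go of this vintage) of what such a wait loop looks like; waitForPodScheduled, the 100ms interval, and the scheduled-check details are illustrative assumptions, not the actual helper in predicates_test.go.

package example

// Sketch only: a poll-until-scheduled helper of the kind that would produce
// the repeated GETs seen in the log. Names and intervals are assumptions.

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server until the pod is bound to a node
// (or its PodScheduled condition turns True), returning wait's timeout error
// otherwise.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			// Returning the error aborts the poll early.
			return false, err
		}
		// Scheduled once spec.nodeName is set...
		if pod.Spec.NodeName != "" {
			return true, nil
		}
		// ...or once the PodScheduled condition is reported True.
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

If the condition never returns true, wait.Poll returns wait.ErrWaitTimeout, whose message is exactly the "timed out waiting for the condition" text reported by predicates_test.go:918 above.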

Error lines from build-log.txt

... skipping 301 lines ...
W0315 00:13:41.588] I0315 00:13:41.587206   55707 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0315 00:13:41.588] I0315 00:13:41.587297   55707 server.go:559] external host was not specified, using 172.17.0.2
W0315 00:13:41.589] W0315 00:13:41.587310   55707 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0315 00:13:41.589] I0315 00:13:41.587555   55707 server.go:146] Version: v1.15.0-alpha.0.1218+dfa25fcc7722d2
W0315 00:13:41.854] I0315 00:13:41.853878   55707 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0315 00:13:41.855] I0315 00:13:41.853907   55707 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0315 00:13:41.855] E0315 00:13:41.854410   55707 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:41.855] E0315 00:13:41.854444   55707 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:41.855] E0315 00:13:41.854477   55707 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:41.855] E0315 00:13:41.854586   55707 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:41.856] E0315 00:13:41.854612   55707 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:41.856] E0315 00:13:41.854625   55707 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:41.856] I0315 00:13:41.854641   55707 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0315 00:13:41.856] I0315 00:13:41.854646   55707 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0315 00:13:41.856] I0315 00:13:41.856231   55707 clientconn.go:551] parsed scheme: ""
W0315 00:13:41.856] I0315 00:13:41.856252   55707 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 00:13:41.857] I0315 00:13:41.856320   55707 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 00:13:41.857] I0315 00:13:41.856913   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0315 00:13:42.400] W0315 00:13:42.400188   55707 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0315 00:13:42.853] I0315 00:13:42.852687   55707 clientconn.go:551] parsed scheme: ""
W0315 00:13:42.854] I0315 00:13:42.853162   55707 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 00:13:42.854] I0315 00:13:42.853438   55707 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 00:13:42.854] I0315 00:13:42.853749   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:13:42.855] I0315 00:13:42.854557   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:13:43.248] E0315 00:13:43.247486   55707 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:43.248] E0315 00:13:43.247584   55707 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:43.249] E0315 00:13:43.247669   55707 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:43.249] E0315 00:13:43.247719   55707 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:43.249] E0315 00:13:43.247761   55707 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:43.249] E0315 00:13:43.247801   55707 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 00:13:43.250] I0315 00:13:43.247847   55707 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0315 00:13:43.250] I0315 00:13:43.247861   55707 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0315 00:13:43.250] I0315 00:13:43.249426   55707 clientconn.go:551] parsed scheme: ""
W0315 00:13:43.251] I0315 00:13:43.249452   55707 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 00:13:43.251] I0315 00:13:43.249527   55707 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 00:13:43.251] I0315 00:13:43.249640   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 267 lines ...
W0315 00:14:19.862] I0315 00:14:19.738844   59005 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W0315 00:14:19.862] I0315 00:14:19.738903   59005 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
W0315 00:14:19.862] I0315 00:14:19.739025   59005 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0315 00:14:19.862] I0315 00:14:19.739050   59005 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W0315 00:14:19.863] I0315 00:14:19.739075   59005 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W0315 00:14:19.863] I0315 00:14:19.739106   59005 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
W0315 00:14:19.863] E0315 00:14:19.739178   59005 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0315 00:14:19.863] I0315 00:14:19.739212   59005 controllermanager.go:497] Started "resourcequota"
W0315 00:14:19.863] I0315 00:14:19.739350   59005 resource_quota_controller.go:276] Starting resource quota controller
W0315 00:14:19.863] I0315 00:14:19.739430   59005 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W0315 00:14:19.863] I0315 00:14:19.739546   59005 resource_quota_monitor.go:301] QuotaMonitor running
W0315 00:14:19.863] I0315 00:14:19.740117   59005 controllermanager.go:497] Started "horizontalpodautoscaling"
W0315 00:14:19.864] I0315 00:14:19.740259   59005 horizontal.go:156] Starting HPA controller
... skipping 48 lines ...
W0315 00:14:19.869] I0315 00:14:19.756442   59005 daemon_controller.go:267] Starting daemon sets controller
W0315 00:14:19.869] I0315 00:14:19.756549   59005 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
W0315 00:14:19.869] I0315 00:14:19.757011   59005 controllermanager.go:497] Started "job"
W0315 00:14:19.869] W0315 00:14:19.757040   59005 controllermanager.go:476] "bootstrapsigner" is disabled
W0315 00:14:19.869] I0315 00:14:19.757513   59005 job_controller.go:143] Starting job controller
W0315 00:14:19.870] I0315 00:14:19.757582   59005 controller_utils.go:1027] Waiting for caches to sync for job controller
W0315 00:14:19.870] E0315 00:14:19.758929   59005 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0315 00:14:19.870] W0315 00:14:19.758972   59005 controllermanager.go:489] Skipping "service"
W0315 00:14:19.870] I0315 00:14:19.759472   59005 node_lifecycle_controller.go:77] Sending events to api server
W0315 00:14:19.870] E0315 00:14:19.759581   59005 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
W0315 00:14:19.870] W0315 00:14:19.759593   59005 controllermanager.go:489] Skipping "cloud-node-lifecycle"
W0315 00:14:19.870] I0315 00:14:19.760047   59005 controllermanager.go:497] Started "pv-protection"
W0315 00:14:19.870] I0315 00:14:19.760187   59005 pv_protection_controller.go:81] Starting PV protection controller
W0315 00:14:19.871] I0315 00:14:19.760206   59005 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
W0315 00:14:19.912] I0315 00:14:19.911692   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:14:19.912] I0315 00:14:19.911848   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 4 lines ...
W0315 00:14:20.168] I0315 00:14:20.168026   59005 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0315 00:14:20.169] I0315 00:14:20.168415   59005 controllermanager.go:497] Started "cronjob"
W0315 00:14:20.169] I0315 00:14:20.168431   59005 cronjob_controller.go:94] Starting CronJob Manager
W0315 00:14:20.169] I0315 00:14:20.169166   59005 controllermanager.go:497] Started "pvc-protection"
W0315 00:14:20.171] I0315 00:14:20.171336   59005 pvc_protection_controller.go:99] Starting PVC protection controller
W0315 00:14:20.172] I0315 00:14:20.171358   59005 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller
W0315 00:14:20.206] W0315 00:14:20.205368   59005 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0315 00:14:20.230] I0315 00:14:20.229510   59005 controller_utils.go:1034] Caches are synced for ReplicaSet controller
W0315 00:14:20.241] I0315 00:14:20.240554   59005 controller_utils.go:1034] Caches are synced for HPA controller
W0315 00:14:20.242] I0315 00:14:20.242074   59005 controller_utils.go:1034] Caches are synced for endpoint controller
W0315 00:14:20.242] I0315 00:14:20.242302   59005 controller_utils.go:1034] Caches are synced for deployment controller
W0315 00:14:20.243] I0315 00:14:20.243158   59005 controller_utils.go:1034] Caches are synced for GC controller
W0315 00:14:20.244] I0315 00:14:20.244015   59005 controller_utils.go:1034] Caches are synced for taint controller
... skipping 2 lines ...
W0315 00:14:20.245] I0315 00:14:20.244405   59005 taint_manager.go:198] Starting NoExecuteTaintManager
W0315 00:14:20.245] I0315 00:14:20.244676   59005 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"471131aa-46b7-11e9-9adf-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0315 00:14:20.252] I0315 00:14:20.251545   59005 controller_utils.go:1034] Caches are synced for namespace controller
W0315 00:14:20.254] I0315 00:14:20.253611   59005 controller_utils.go:1034] Caches are synced for TTL controller
W0315 00:14:20.272] I0315 00:14:20.271633   59005 controller_utils.go:1034] Caches are synced for PVC protection controller
W0315 00:14:20.343] I0315 00:14:20.343047   59005 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W0315 00:14:20.357] E0315 00:14:20.356908   59005 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0315 00:14:20.359] E0315 00:14:20.358783   59005 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0315 00:14:20.373] E0315 00:14:20.372315   59005 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0315 00:14:20.378] E0315 00:14:20.377862   59005 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0315 00:14:20.385] E0315 00:14:20.385178   59005 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0315 00:14:20.429] I0315 00:14:20.428989   59005 controller_utils.go:1034] Caches are synced for service account controller
W0315 00:14:20.432] I0315 00:14:20.431879   55707 controller.go:606] quota admission added evaluator for: serviceaccounts
W0315 00:14:20.532] I0315 00:14:20.532135   59005 controller_utils.go:1034] Caches are synced for ReplicationController controller
W0315 00:14:20.553] I0315 00:14:20.553236   59005 controller_utils.go:1034] Caches are synced for disruption controller
W0315 00:14:20.554] I0315 00:14:20.553275   59005 disruption.go:294] Sending events to api server.
W0315 00:14:20.631] I0315 00:14:20.630564   59005 controller_utils.go:1034] Caches are synced for persistent volume controller
... skipping 52 lines ...
I0315 00:14:21.394] +++ [0315 00:14:21] Creating namespace namespace-1552608861-31199
I0315 00:14:21.484] namespace/namespace-1552608861-31199 created
I0315 00:14:21.556] Context "test" modified.
I0315 00:14:21.564] +++ [0315 00:14:21] Testing kubectl(v1:config set)
I0315 00:14:21.645] Cluster "test-cluster" set.
I0315 00:14:21.718] Property "clusters.test-cluster.certificate-authority-data" set.
W0315 00:14:21.818] E0315 00:14:21.434665   59005 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0315 00:14:21.819] I0315 00:14:21.662434   59005 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0315 00:14:21.819] I0315 00:14:21.762900   59005 controller_utils.go:1034] Caches are synced for garbage collector controller
W0315 00:14:21.913] I0315 00:14:21.912601   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:14:21.913] I0315 00:14:21.912862   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:14:22.014] Property "clusters.test-cluster.certificate-authority-data" set.
I0315 00:14:22.014] +++ exit code: 0
... skipping 33 lines ...
I0315 00:14:24.078] +++ [0315 00:14:24] Creating namespace namespace-1552608864-10397
I0315 00:14:24.152] namespace/namespace-1552608864-10397 created
I0315 00:14:24.227] Context "test" modified.
I0315 00:14:24.233] +++ [0315 00:14:24] Testing RESTMapper
W0315 00:14:24.334] I0315 00:14:23.913839   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:14:24.334] I0315 00:14:23.914058   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:14:24.435] +++ [0315 00:14:24] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0315 00:14:24.435] +++ exit code: 0
I0315 00:14:24.491] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0315 00:14:24.491] bindings                                                                      true         Binding
I0315 00:14:24.492] componentstatuses                 cs                                          false        ComponentStatus
I0315 00:14:24.492] configmaps                        cm                                          true         ConfigMap
I0315 00:14:24.492] endpoints                         ep                                          true         Endpoints
... skipping 677 lines ...
I0315 00:14:43.174] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:14:43.357] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:14:43.456] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:14:43.549] (Bpod "valid-pod" force deleted
W0315 00:14:43.650] I0315 00:14:42.924784   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:14:43.650] I0315 00:14:42.925056   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:14:43.651] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0315 00:14:43.651] error: setting 'all' parameter but found a non empty selector. 
W0315 00:14:43.651] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 00:14:43.751] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0315 00:14:43.765] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0315 00:14:43.849] (Bnamespace/test-kubectl-describe-pod created
I0315 00:14:43.954] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0315 00:14:44.058] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
W0315 00:14:45.195] I0315 00:14:44.926191   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:14:45.296] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0315 00:14:45.296] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0315 00:14:45.396] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0315 00:14:45.577] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:14:45.787] (Bpod/env-test-pod created
W0315 00:14:45.887] error: min-available and max-unavailable cannot be both specified
W0315 00:14:45.927] I0315 00:14:45.926620   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:14:45.928] I0315 00:14:45.926862   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:14:46.028] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0315 00:14:46.028] Name:               env-test-pod
I0315 00:14:46.028] Namespace:          test-kubectl-describe-pod
I0315 00:14:46.029] Priority:           0
... skipping 173 lines ...
W0315 00:14:59.419] I0315 00:14:58.935742   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:14:59.419] I0315 00:14:58.972090   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608892-18524", Name:"modified", UID:"5e7ba12a-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-mh6w6
I0315 00:14:59.570] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:14:59.722] (Bpod/valid-pod created
I0315 00:14:59.817] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:14:59.969] (BSuccessful
I0315 00:14:59.969] message:Error from server: cannot restore map from string
I0315 00:14:59.970] has:cannot restore map from string
I0315 00:15:00.056] Successful
I0315 00:15:00.057] message:pod/valid-pod patched (no change)
I0315 00:15:00.057] has:patched (no change)
I0315 00:15:00.141] pod/valid-pod patched
I0315 00:15:00.234] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 4 lines ...
I0315 00:15:00.697] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 00:15:00.780] (Bpod/valid-pod patched
I0315 00:15:00.884] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0315 00:15:00.960] (Bpod/valid-pod patched
W0315 00:15:01.061] I0315 00:14:59.936051   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:01.061] I0315 00:14:59.936262   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:01.061] E0315 00:14:59.961068   55707 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
W0315 00:15:01.062] I0315 00:15:00.936621   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:01.062] I0315 00:15:00.936876   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:15:01.162] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0315 00:15:01.245] (Bpod/valid-pod patched
I0315 00:15:01.350] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 00:15:01.533] (B+++ [0315 00:15:01] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0315 00:15:01.807] pod "valid-pod" deleted
I0315 00:15:01.820] pod/valid-pod replaced
I0315 00:15:01.931] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0315 00:15:02.112] (BSuccessful
I0315 00:15:02.112] message:error: --grace-period must have --force specified
I0315 00:15:02.112] has:\-\-grace-period must have \-\-force specified
W0315 00:15:02.213] I0315 00:15:01.937207   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:02.213] I0315 00:15:01.937487   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:15:02.314] Successful
I0315 00:15:02.314] message:error: --timeout must have --force specified
I0315 00:15:02.314] has:\-\-timeout must have \-\-force specified
I0315 00:15:02.456] node/node-v1-test created
W0315 00:15:02.556] W0315 00:15:02.456104   59005 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0315 00:15:02.657] node/node-v1-test replaced
I0315 00:15:02.737] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0315 00:15:02.818] (Bnode "node-v1-test" deleted
I0315 00:15:02.922] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 00:15:03.204] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0315 00:15:04.255] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 30 lines ...
W0315 00:15:05.793] Edit cancelled, no changes made.
W0315 00:15:05.793] Edit cancelled, no changes made.
W0315 00:15:05.793] I0315 00:15:03.938459   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:05.793] I0315 00:15:03.938731   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:05.794] Edit cancelled, no changes made.
W0315 00:15:05.794] Edit cancelled, no changes made.
W0315 00:15:05.794] error: 'name' already has a value (valid-pod), and --overwrite is false
W0315 00:15:05.794] I0315 00:15:04.938999   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:05.794] I0315 00:15:04.939267   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:05.794] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 00:15:05.895] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0315 00:15:05.899] (Bcore.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0315 00:15:05.982] (Bpod "redis-master" deleted
... skipping 86 lines ...
I0315 00:15:12.451] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0315 00:15:12.454] +++ working dir: /go/src/k8s.io/kubernetes
I0315 00:15:12.457] +++ command: run_kubectl_create_error_tests
I0315 00:15:12.471] +++ [0315 00:15:12] Creating namespace namespace-1552608912-22394
I0315 00:15:12.544] namespace/namespace-1552608912-22394 created
I0315 00:15:12.612] Context "test" modified.
I0315 00:15:12.620] +++ [0315 00:15:12] Testing kubectl create with error
W0315 00:15:12.721] I0315 00:15:11.943104   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:12.722] I0315 00:15:11.943349   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:12.722] Error: must specify one of -f and -k
W0315 00:15:12.722] 
W0315 00:15:12.722] Create a resource from a file or from stdin.
W0315 00:15:12.722] 
W0315 00:15:12.722]  JSON and YAML formats are accepted.
W0315 00:15:12.722] 
W0315 00:15:12.722] Examples:
... skipping 41 lines ...
W0315 00:15:12.727] 
W0315 00:15:12.727] Usage:
W0315 00:15:12.727]   kubectl create -f FILENAME [options]
W0315 00:15:12.727] 
W0315 00:15:12.727] Use "kubectl <command> --help" for more information about a given command.
W0315 00:15:12.727] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0315 00:15:12.855] +++ [0315 00:15:12] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0315 00:15:12.955] kubectl convert is DEPRECATED and will be removed in a future version.
W0315 00:15:12.956] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0315 00:15:12.956] I0315 00:15:12.943784   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:12.956] I0315 00:15:12.943984   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:15:13.057] +++ exit code: 0
I0315 00:15:13.077] Recording: run_kubectl_apply_tests
... skipping 25 lines ...
W0315 00:15:15.280] I0315 00:15:15.279946   55707 clientconn.go:551] parsed scheme: ""
W0315 00:15:15.281] I0315 00:15:15.279993   55707 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 00:15:15.281] I0315 00:15:15.280063   55707 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 00:15:15.281] I0315 00:15:15.280127   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:15:15.281] I0315 00:15:15.280613   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:15:15.283] I0315 00:15:15.282682   55707 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0315 00:15:15.376] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0315 00:15:15.477] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0315 00:15:15.477] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0315 00:15:15.496] +++ exit code: 0
I0315 00:15:15.556] Recording: run_kubectl_run_tests
I0315 00:15:15.557] Running command: run_kubectl_run_tests
I0315 00:15:15.580] 
... skipping 94 lines ...
I0315 00:15:18.094] Context "test" modified.
I0315 00:15:18.102] +++ [0315 00:15:18] Testing kubectl create filter
I0315 00:15:18.196] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:18.366] (Bpod/selector-test-pod created
I0315 00:15:18.464] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0315 00:15:18.551] (BSuccessful
I0315 00:15:18.551] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0315 00:15:18.551] has:pods "selector-test-pod-dont-apply" not found
I0315 00:15:18.631] pod "selector-test-pod" deleted
I0315 00:15:18.652] +++ exit code: 0
I0315 00:15:18.691] Recording: run_kubectl_apply_deployments_tests
I0315 00:15:18.691] Running command: run_kubectl_apply_deployments_tests
I0315 00:15:18.712] 
... skipping 39 lines ...
I0315 00:15:20.458] (Bapps.sh:131: Successful get deployments my-depl {{.metadata.labels.l2}}: l2
I0315 00:15:20.549] (Bdeployment.extensions "my-depl" deleted
I0315 00:15:20.559] replicaset.extensions "my-depl-64775887d7" deleted
I0315 00:15:20.565] replicaset.extensions "my-depl-656cffcbcc" deleted
I0315 00:15:20.577] pod "my-depl-64775887d7-tlz5f" deleted
I0315 00:15:20.583] pod "my-depl-656cffcbcc-9ntlj" deleted
W0315 00:15:20.684] E0315 00:15:20.580311   59005 replica_set.go:450] Sync "namespace-1552608918-20435/my-depl-64775887d7" failed with replicasets.apps "my-depl-64775887d7" not found
W0315 00:15:20.684] I0315 00:15:20.593629   55707 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0315 00:15:20.684] E0315 00:15:20.599907   59005 replica_set.go:450] Sync "namespace-1552608918-20435/my-depl-656cffcbcc" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-656cffcbcc": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1552608918-20435/my-depl-656cffcbcc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6a9d8d24-46b7-11e9-9adf-0242ac110002, UID in object meta: 
I0315 00:15:20.785] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:20.803] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:20.897] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:20.994] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:21.178] (Bdeployment.extensions/nginx created
W0315 00:15:21.279] I0315 00:15:20.948454   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:21.279] I0315 00:15:20.948735   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:21.279] I0315 00:15:21.185750   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552608918-20435", Name:"nginx", UID:"6bb8d7fd-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0315 00:15:21.280] I0315 00:15:21.190149   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608918-20435", Name:"nginx-776cc67f78", UID:"6bb9d056-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-w2d5j
W0315 00:15:21.280] I0315 00:15:21.194541   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608918-20435", Name:"nginx-776cc67f78", UID:"6bb9d056-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-ngbp9
W0315 00:15:21.280] I0315 00:15:21.194860   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608918-20435", Name:"nginx-776cc67f78", UID:"6bb9d056-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-2f7jk
I0315 00:15:21.381] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0315 00:15:25.567] (BSuccessful
I0315 00:15:25.568] message:Error from server (Conflict): error when applying patch:
I0315 00:15:25.569] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552608918-20435\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0315 00:15:25.569] to:
I0315 00:15:25.569] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0315 00:15:25.569] Name: "nginx", Namespace: "namespace-1552608918-20435"
I0315 00:15:25.571] Object: &{map["status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-03-15T00:15:21Z" "lastTransitionTime":"2019-03-15T00:15:21Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]]] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["uid":"6bb8d7fd-46b7-11e9-9adf-0242ac110002" "generation":'\x01' "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552608918-20435\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "namespace":"namespace-1552608918-20435" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1552608918-20435/deployments/nginx" "resourceVersion":"609" "creationTimestamp":"2019-03-15T00:15:21Z" "managedFields":[map["manager":"kube-controller-manager" "operation":"Update" "apiVersion":"apps/v1" "time":"2019-03-15T00:15:21Z" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map["f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[] ".":map[] "f:lastTransitionTime":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]]] map["manager":"kubectl" "operation":"Update" "apiVersion":"extensions/v1beta1" "time":"2019-03-15T00:15:21Z" "fields":map["f:spec":map["f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[] "f:containers":map["k:{\"name\":\"nginx\"}":map["f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[] ".":map[] "f:image":map[] "f:imagePullPolicy":map[]]]]] "f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]]] "f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]]]]]] "spec":map["revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" 
"terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']]]]}
I0315 00:15:25.572] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0315 00:15:25.572] has:Error from server (Conflict)
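A hedged sketch: the Conflict above is the negative path the apply tests expect, where the live Deployment changed between kubectl's read and its write. One generic way to provoke the same 409 outside the harness, assuming a Deployment named nginx and a scratch file (both illustrative, not taken from the suite):
  $ kubectl get deployment nginx -o yaml > /tmp/nginx-stale.yaml   # captures the current resourceVersion
  $ kubectl scale deployment nginx --replicas=2                    # server bumps resourceVersion
  $ kubectl replace -f /tmp/nginx-stale.yaml                       # stale copy -> Error from server (Conflict)
Re-reading the object before writing (or letting kubectl apply re-fetch it) clears the conflict.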
W0315 00:15:25.672] I0315 00:15:21.949074   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:25.673] I0315 00:15:21.949290   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:25.673] I0315 00:15:22.949709   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:25.674] I0315 00:15:22.949953   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:25.674] I0315 00:15:23.950337   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:25.674] I0315 00:15:23.950632   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 203 lines ...
I0315 00:15:38.374] +++ [0315 00:15:38] Creating namespace namespace-1552608938-20620
I0315 00:15:38.449] namespace/namespace-1552608938-20620 created
I0315 00:15:38.520] Context "test" modified.
I0315 00:15:38.528] +++ [0315 00:15:38] Testing kubectl get
I0315 00:15:38.620] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:38.710] Successful
I0315 00:15:38.711] message:Error from server (NotFound): pods "abc" not found
I0315 00:15:38.711] has:pods "abc" not found
I0315 00:15:38.803] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:38.894] Successful
I0315 00:15:38.894] message:Error from server (NotFound): pods "abc" not found
I0315 00:15:38.894] has:pods "abc" not found
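The NotFound checks in this block all wrap the same probe, a get of a pod that was never created; only the output format changes between assertions. The underlying command is simply:
  $ kubectl get pods abc    # Error from server (NotFound): pods "abc" not found (non-zero exit code)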
I0315 00:15:38.990] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:39.076] Successful
I0315 00:15:39.076] message:{
I0315 00:15:39.076]     "apiVersion": "v1",
I0315 00:15:39.077]     "items": [],
... skipping 23 lines ...
I0315 00:15:39.420] has not:No resources found
I0315 00:15:39.507] Successful
I0315 00:15:39.508] message:NAME
I0315 00:15:39.508] has not:No resources found
I0315 00:15:39.601] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:39.706] Successful
I0315 00:15:39.706] message:error: the server doesn't have a resource type "foobar"
I0315 00:15:39.706] has not:No resources found
I0315 00:15:39.796] Successful
I0315 00:15:39.797] message:No resources found.
I0315 00:15:39.797] has:No resources found
I0315 00:15:39.884] Successful
I0315 00:15:39.884] message:
I0315 00:15:39.884] has not:No resources found
I0315 00:15:39.968] Successful
I0315 00:15:39.968] message:No resources found.
I0315 00:15:39.968] has:No resources found
I0315 00:15:40.057] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:40.147] Successful
I0315 00:15:40.149] message:Error from server (NotFound): pods "abc" not found
I0315 00:15:40.150] has:pods "abc" not found
I0315 00:15:40.150] FAIL!
I0315 00:15:40.150] message:Error from server (NotFound): pods "abc" not found
I0315 00:15:40.150] has not:List
I0315 00:15:40.151] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
W0315 00:15:40.251] I0315 00:15:38.960109   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:40.251] I0315 00:15:38.960331   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:40.251] I0315 00:15:39.960570   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:40.252] I0315 00:15:39.960745   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 717 lines ...
I0315 00:15:43.824] }
I0315 00:15:43.919] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:15:44.178] <no value>Successful
I0315 00:15:44.179] message:valid-pod:
I0315 00:15:44.179] has:valid-pod:
I0315 00:15:44.266] Successful
I0315 00:15:44.266] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0315 00:15:44.267] 	template was:
I0315 00:15:44.267] 		{.missing}
I0315 00:15:44.267] 	object given to jsonpath engine was:
I0315 00:15:44.268] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"namespace-1552608943-6746", "selfLink":"/api/v1/namespaces/namespace-1552608943-6746/pods/valid-pod", "uid":"79295687-46b7-11e9-9adf-0242ac110002", "resourceVersion":"706", "creationTimestamp":"2019-03-15T00:15:43Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "time":"2019-03-15T00:15:43Z", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}, "f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{"f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{"f:memory":map[string]interface {}{}, ".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}, ".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update"}}, "name":"valid-pod"}, "spec":map[string]interface {}{"terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"memory":"512Mi", "cpu":"1"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname"}}, "restartPolicy":"Always"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0315 00:15:44.268] has:missing is not found
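The failure above is kubectl's jsonpath printer rejecting a key that is not present on the object. A minimal reproduction against the same pod (name taken from the log):
  $ kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # prints: valid-pod
  $ kubectl get pod valid-pod -o jsonpath='{.missing}'         # error executing jsonpath "{.missing}": missing is not found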
I0315 00:15:44.356] Successful
I0315 00:15:44.357] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0315 00:15:44.357] 	template was:
I0315 00:15:44.357] 		{{.missing}}
I0315 00:15:44.357] 	raw data was:
I0315 00:15:44.358] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T00:15:43Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T00:15:43Z"}],"name":"valid-pod","namespace":"namespace-1552608943-6746","resourceVersion":"706","selfLink":"/api/v1/namespaces/namespace-1552608943-6746/pods/valid-pod","uid":"79295687-46b7-11e9-9adf-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0315 00:15:44.358] 	object given to template engine was:
I0315 00:15:44.359] 		map[apiVersion:v1 kind:Pod metadata:map[namespace:namespace-1552608943-6746 resourceVersion:706 selfLink:/api/v1/namespaces/namespace-1552608943-6746/pods/valid-pod uid:79295687-46b7-11e9-9adf-0242ac110002 creationTimestamp:2019-03-15T00:15:43Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[] f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[f:imagePullPolicy:map[] f:name:map[] f:resources:map[f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]] .:map[]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[] .:map[] f:image:map[]]] f:dnsPolicy:map[]]] manager:kubectl operation:Update time:2019-03-15T00:15:43Z]] name:valid-pod] spec:map[schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always] status:map[phase:Pending qosClass:Guaranteed]]
I0315 00:15:44.359] has:map has no entry for key "missing"
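The same probe repeated through the go-template printer, which reports the missing key differently. Sketch, same assumed pod:
  $ kubectl get pod valid-pod -o go-template='{{.metadata.name}}'   # prints: valid-pod
  $ kubectl get pod valid-pod -o go-template='{{.missing}}'         # map has no entry for key "missing"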
W0315 00:15:44.459] I0315 00:15:43.962992   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:44.460] I0315 00:15:43.963281   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:44.460] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0315 00:15:44.964] I0315 00:15:44.963606   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:44.964] I0315 00:15:44.963885   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:15:45.443] E0315 00:15:45.442578   70472 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0315 00:15:45.544] Successful
I0315 00:15:45.544] message:NAME        READY   STATUS    RESTARTS   AGE
I0315 00:15:45.544] valid-pod   0/1     Pending   0          1s
... skipping 158 lines ...
I0315 00:15:47.743]   terminationGracePeriodSeconds: 30
I0315 00:15:47.743] status:
I0315 00:15:47.744]   phase: Pending
I0315 00:15:47.744]   qosClass: Guaranteed
I0315 00:15:47.744] has:name: valid-pod
I0315 00:15:47.744] Successful
I0315 00:15:47.744] message:Error from server (NotFound): pods "invalid-pod" not found
I0315 00:15:47.744] has:"invalid-pod" not found
I0315 00:15:47.811] pod "valid-pod" deleted
I0315 00:15:47.908] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:15:48.070] pod/redis-master created
I0315 00:15:48.074] pod/valid-pod created
I0315 00:15:48.174] Successful
... skipping 291 lines ...
I0315 00:15:53.891] Running command: run_create_secret_tests
I0315 00:15:53.915] 
I0315 00:15:53.917] +++ Running case: test-cmd.run_create_secret_tests 
I0315 00:15:53.920] +++ working dir: /go/src/k8s.io/kubernetes
I0315 00:15:53.923] +++ command: run_create_secret_tests
I0315 00:15:54.022] Successful
I0315 00:15:54.023] message:Error from server (NotFound): secrets "mysecret" not found
I0315 00:15:54.023] has:secrets "mysecret" not found
W0315 00:15:54.123] I0315 00:15:53.968857   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:15:54.124] I0315 00:15:53.969169   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:15:54.224] Successful
I0315 00:15:54.225] message:Error from server (NotFound): secrets "mysecret" not found
I0315 00:15:54.225] has:secrets "mysecret" not found
I0315 00:15:54.225] Successful
I0315 00:15:54.225] message:user-specified
I0315 00:15:54.225] has:user-specified
I0315 00:15:54.269] Successful
I0315 00:15:54.347] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"7f7dc7f2-46b7-11e9-9adf-0242ac110002","resourceVersion":"813","creationTimestamp":"2019-03-15T00:15:54Z"}}
... skipping 178 lines ...
I0315 00:15:58.401] has:Timeout exceeded while reading body
I0315 00:15:58.486] Successful
I0315 00:15:58.487] message:NAME        READY   STATUS    RESTARTS   AGE
I0315 00:15:58.487] valid-pod   0/1     Pending   0          1s
I0315 00:15:58.487] has:valid-pod
I0315 00:15:58.562] Successful
I0315 00:15:58.563] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0315 00:15:58.563] has:Invalid timeout value
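Two distinct timeout behaviours are asserted here: a syntactically valid timeout that simply expires (Timeout exceeded while reading body, earlier in the block) and a malformed value rejected before any request is sent. Assuming the flag under test is kubectl's global --request-timeout (not confirmed by this excerpt):
  $ kubectl get pod valid-pod --request-timeout=1s        # accepted: integer with a time unit
  $ kubectl get pod valid-pod --request-timeout=invalid   # rejected: error: Invalid timeout value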
I0315 00:15:58.648] pod "valid-pod" deleted
I0315 00:15:58.671] +++ exit code: 0
I0315 00:15:58.721] Recording: run_crd_tests
I0315 00:15:58.721] Running command: run_crd_tests
I0315 00:15:58.746] 
... skipping 243 lines ...
I0315 00:16:03.394] foo.company.com/test patched
I0315 00:16:03.493] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0315 00:16:03.583] foo.company.com/test patched
I0315 00:16:03.681] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0315 00:16:03.769] foo.company.com/test patched
I0315 00:16:03.868] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0315 00:16:04.035] +++ [0315 00:16:04] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0315 00:16:04.102] {
I0315 00:16:04.103]     "apiVersion": "company.com/v1",
I0315 00:16:04.103]     "kind": "Foo",
I0315 00:16:04.103]     "metadata": {
I0315 00:16:04.103]         "annotations": {
I0315 00:16:04.103]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 279 lines ...
W0315 00:16:19.986] I0315 00:16:19.985190   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:19.986] I0315 00:16:19.985414   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:20.986] I0315 00:16:20.985756   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:20.987] I0315 00:16:20.985974   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:21.987] I0315 00:16:21.986386   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:21.987] I0315 00:16:21.986681   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:22.346] E0315 00:16:22.344849   59005 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W0315 00:16:22.976] I0315 00:16:22.975692   59005 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0315 00:16:22.977] I0315 00:16:22.976906   55707 clientconn.go:551] parsed scheme: ""
W0315 00:16:22.977] I0315 00:16:22.976946   55707 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 00:16:22.977] I0315 00:16:22.976990   55707 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 00:16:22.978] I0315 00:16:22.977050   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:16:22.978] I0315 00:16:22.977896   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 88 lines ...
W0315 00:16:33.994] I0315 00:16:33.994040   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:33.995] I0315 00:16:33.994335   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:16:34.955] crd.sh:459: Successful get bars {{len .items}}: 0
I0315 00:16:35.122] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0315 00:16:35.223] I0315 00:16:34.994635   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:35.223] I0315 00:16:34.994834   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:35.223] Error from server (NotFound): namespaces "non-native-resources" not found
I0315 00:16:35.324] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0315 00:16:35.339] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0315 00:16:35.464] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0315 00:16:35.503] +++ exit code: 0
I0315 00:16:35.591] Recording: run_cmd_with_img_tests
I0315 00:16:35.591] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0315 00:16:35.789] +++ [0315 00:16:35] Testing cmd with image
I0315 00:16:35.888] Successful
I0315 00:16:35.889] message:deployment.apps/test1 created
I0315 00:16:35.889] has:deployment.apps/test1 created
I0315 00:16:35.968] deployment.extensions "test1" deleted
I0315 00:16:36.051] Successful
I0315 00:16:36.051] message:error: Invalid image name "InvalidImageName": invalid reference format
I0315 00:16:36.051] has:error: Invalid image name "InvalidImageName": invalid reference format
I0315 00:16:36.068] +++ exit code: 0
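run_cmd_with_img_tests drives kubectl run once with a usable image reference and once with an invalid one; the deprecation warning for the apps.v1 generator appears later in the captured stderr. A sketch of the pair (the image value is illustrative):
  $ kubectl run test1 --image=k8s.gcr.io/nginx:test-cmd --generator=deployment/apps.v1   # deployment.apps/test1 created
  $ kubectl run test1 --image=InvalidImageName                                           # error: Invalid image name "InvalidImageName": invalid reference format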
I0315 00:16:36.120] +++ [0315 00:16:36] Testing recursive resources
I0315 00:16:36.127] +++ [0315 00:16:36] Creating namespace namespace-1552608996-18381
I0315 00:16:36.205] namespace/namespace-1552608996-18381 created
I0315 00:16:36.281] Context "test" modified.
I0315 00:16:36.379] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:36.663] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:36.666] Successful
I0315 00:16:36.666] message:pod/busybox0 created
I0315 00:16:36.666] pod/busybox1 created
I0315 00:16:36.666] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 00:16:36.666] has:error validating data: kind not set
I0315 00:16:36.763] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:36.955] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0315 00:16:36.957] Successful
I0315 00:16:36.958] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:36.958] has:Object 'Kind' is missing
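The recursive-resources block points kubectl at a directory in which one manifest is deliberately broken ("ind" instead of "kind"), so each verb is expected to succeed for busybox0/busybox1 and surface an error for busybox-broken.yaml. The create step has roughly this shape (directory path taken from the error messages, remaining flags assumed):
  $ kubectl create -f hack/testdata/recursive/pod --recursive   # creates busybox0 and busybox1, reports "kind not set" for the broken file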
I0315 00:16:37.057] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:37.354] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0315 00:16:37.356] Successful
I0315 00:16:37.357] message:pod/busybox0 replaced
I0315 00:16:37.357] pod/busybox1 replaced
I0315 00:16:37.357] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 00:16:37.357] has:error validating data: kind not set
I0315 00:16:37.455] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:37.558] (BSuccessful
I0315 00:16:37.558] message:Name:               busybox0
I0315 00:16:37.558] Namespace:          namespace-1552608996-18381
I0315 00:16:37.558] Priority:           0
I0315 00:16:37.559] PriorityClassName:  <none>
... skipping 159 lines ...
I0315 00:16:37.576] has:Object 'Kind' is missing
I0315 00:16:37.661] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:37.853] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0315 00:16:37.855] Successful
I0315 00:16:37.855] message:pod/busybox0 annotated
I0315 00:16:37.855] pod/busybox1 annotated
I0315 00:16:37.856] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:37.856] has:Object 'Kind' is missing
W0315 00:16:37.956] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0315 00:16:37.957] I0315 00:16:35.878710   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552608995-15558", Name:"test1", UID:"983e3620-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"971", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-848d5d4b47 to 1
W0315 00:16:37.957] I0315 00:16:35.884342   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608995-15558", Name:"test1-848d5d4b47", UID:"983f1fe5-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"972", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-r46s6
W0315 00:16:37.957] I0315 00:16:35.995174   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:37.958] I0315 00:16:35.995386   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 5 lines ...
I0315 00:16:38.263] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0315 00:16:38.265] Successful
I0315 00:16:38.266] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0315 00:16:38.266] pod/busybox0 configured
I0315 00:16:38.266] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0315 00:16:38.266] pod/busybox1 configured
I0315 00:16:38.267] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 00:16:38.267] has:error validating data: kind not set
I0315 00:16:38.360] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:38.530] (Bdeployment.apps/nginx created
W0315 00:16:38.630] I0315 00:16:38.535867   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552608996-18381", Name:"nginx", UID:"99d3b6e9-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"998", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0315 00:16:38.631] I0315 00:16:38.541092   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx-5f7cff5b56", UID:"99d497de-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-l7csr
W0315 00:16:38.631] I0315 00:16:38.546227   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx-5f7cff5b56", UID:"99d497de-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-bqg9k
W0315 00:16:38.632] I0315 00:16:38.546778   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx-5f7cff5b56", UID:"99d497de-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-mjpw5
... skipping 50 lines ...
W0315 00:16:39.106] I0315 00:16:38.997072   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:16:39.206] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:39.294] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:39.296] Successful
I0315 00:16:39.297] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0315 00:16:39.297] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0315 00:16:39.297] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:39.297] has:Object 'Kind' is missing
I0315 00:16:39.394] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:39.487] Successful
I0315 00:16:39.487] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:39.488] has:busybox0:busybox1:
I0315 00:16:39.490] Successful
I0315 00:16:39.490] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:39.490] has:Object 'Kind' is missing
I0315 00:16:39.585] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:39.689] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:39.786] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0315 00:16:39.789] Successful
I0315 00:16:39.789] message:pod/busybox0 labeled
I0315 00:16:39.789] pod/busybox1 labeled
I0315 00:16:39.790] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:39.790] has:Object 'Kind' is missing
I0315 00:16:39.887] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:39.985] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:40.085] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0315 00:16:40.087] Successful
I0315 00:16:40.088] message:pod/busybox0 patched
I0315 00:16:40.088] pod/busybox1 patched
I0315 00:16:40.088] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:40.088] has:Object 'Kind' is missing
I0315 00:16:40.185] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:40.381] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:40.383] Successful
I0315 00:16:40.383] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 00:16:40.383] pod "busybox0" force deleted
I0315 00:16:40.383] pod "busybox1" force deleted
I0315 00:16:40.384] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 00:16:40.384] has:Object 'Kind' is missing
I0315 00:16:40.476] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:40.652] replicationcontroller/busybox0 created
I0315 00:16:40.658] replicationcontroller/busybox1 created
W0315 00:16:40.759] I0315 00:16:39.854030   59005 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0315 00:16:40.759] I0315 00:16:39.997352   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:40.759] I0315 00:16:39.997705   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:40.760] I0315 00:16:40.659364   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608996-18381", Name:"busybox0", UID:"9b1745de-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-8974n
W0315 00:16:40.760] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 00:16:40.760] I0315 00:16:40.663420   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608996-18381", Name:"busybox1", UID:"9b189175-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-dnkp4
I0315 00:16:40.861] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:40.888] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:40.998] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 00:16:41.100] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 00:16:41.306] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0315 00:16:41.409] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0315 00:16:41.412] Successful
I0315 00:16:41.412] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0315 00:16:41.412] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0315 00:16:41.413] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:41.413] has:Object 'Kind' is missing
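The 1 2 80 assertions come from autoscaling both replication controllers over the same directory. A hedged sketch of a command that would yield that HPA spec (recursive handling is inferred from the per-file error above, not read from generic-resources.sh):
  $ kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80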
I0315 00:16:41.499] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0315 00:16:41.594] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0315 00:16:41.695] I0315 00:16:40.997971   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:41.695] I0315 00:16:40.998196   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:16:41.796] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:41.819] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 00:16:41.922] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 00:16:42.132] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0315 00:16:42.236] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0315 00:16:42.239] Successful
I0315 00:16:42.239] message:service/busybox0 exposed
I0315 00:16:42.239] service/busybox1 exposed
I0315 00:16:42.240] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:42.240] has:Object 'Kind' is missing
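The "<no value> 80" checks correspond to exposing each replication controller on port 80 without naming the port. Sketch, under the same assumptions as the autoscale step:
  $ kubectl expose -f hack/testdata/recursive/rc --recursive --port=80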
I0315 00:16:42.337] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:42.432] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 00:16:42.526] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 00:16:42.727] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0315 00:16:42.820] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0315 00:16:42.823] Successful
I0315 00:16:42.823] message:replicationcontroller/busybox0 scaled
I0315 00:16:42.823] replicationcontroller/busybox1 scaled
I0315 00:16:42.823] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:42.824] has:Object 'Kind' is missing
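Here both controllers are scaled from 1 to 2 replicas, again recursively over the directory; the exact flag set is an assumption:
  $ kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2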
I0315 00:16:42.922] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:43.119] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:43.121] Successful
I0315 00:16:43.122] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 00:16:43.122] replicationcontroller "busybox0" force deleted
I0315 00:16:43.122] replicationcontroller "busybox1" force deleted
I0315 00:16:43.122] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:43.123] has:Object 'Kind' is missing
I0315 00:16:43.214] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:43.381] deployment.apps/nginx1-deployment created
I0315 00:16:43.386] deployment.apps/nginx0-deployment created
W0315 00:16:43.487] I0315 00:16:41.998472   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:43.487] I0315 00:16:41.998703   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:43.488] I0315 00:16:42.625007   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608996-18381", Name:"busybox0", UID:"9b1745de-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1050", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qzznp
W0315 00:16:43.488] I0315 00:16:42.635650   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608996-18381", Name:"busybox1", UID:"9b189175-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1053", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-vpc2d
W0315 00:16:43.488] I0315 00:16:42.999096   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:43.488] I0315 00:16:42.999357   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:43.489] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 00:16:43.489] I0315 00:16:43.387133   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552608996-18381", Name:"nginx1-deployment", UID:"9cb7e96f-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0315 00:16:43.489] I0315 00:16:43.392171   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx1-deployment-7c76c6cbb8", UID:"9cb8ddc5-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-xdqjg
W0315 00:16:43.489] I0315 00:16:43.393488   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552608996-18381", Name:"nginx0-deployment", UID:"9cb8de09-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0315 00:16:43.490] I0315 00:16:43.398402   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx1-deployment-7c76c6cbb8", UID:"9cb8ddc5-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-dtl55
W0315 00:16:43.490] I0315 00:16:43.400976   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx0-deployment-7bb85585d7", UID:"9cb9bcfe-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-7f4vc
W0315 00:16:43.490] I0315 00:16:43.406554   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552608996-18381", Name:"nginx0-deployment-7bb85585d7", UID:"9cb9bcfe-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-9rqkl
I0315 00:16:43.591] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0315 00:16:43.600] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0315 00:16:43.819] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0315 00:16:43.821] Successful
I0315 00:16:43.822] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0315 00:16:43.822] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0315 00:16:43.822] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 00:16:43.822] has:Object 'Kind' is missing
I0315 00:16:43.924] deployment.apps/nginx1-deployment paused
I0315 00:16:43.932] deployment.apps/nginx0-deployment paused
W0315 00:16:44.033] I0315 00:16:43.999689   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:44.033] I0315 00:16:43.999871   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:16:44.134] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
... skipping 12 lines ...
I0315 00:16:44.394] 1         <none>
I0315 00:16:44.394] 
I0315 00:16:44.394] deployment.apps/nginx0-deployment 
I0315 00:16:44.394] REVISION  CHANGE-CAUSE
I0315 00:16:44.394] 1         <none>
I0315 00:16:44.394] 
I0315 00:16:44.395] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 00:16:44.395] has:nginx0-deployment
I0315 00:16:44.396] Successful
I0315 00:16:44.396] message:deployment.apps/nginx1-deployment 
I0315 00:16:44.396] REVISION  CHANGE-CAUSE
I0315 00:16:44.396] 1         <none>
I0315 00:16:44.396] 
I0315 00:16:44.396] deployment.apps/nginx0-deployment 
I0315 00:16:44.397] REVISION  CHANGE-CAUSE
I0315 00:16:44.397] 1         <none>
I0315 00:16:44.397] 
I0315 00:16:44.397] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 00:16:44.397] has:nginx1-deployment
I0315 00:16:44.399] Successful
I0315 00:16:44.399] message:deployment.apps/nginx1-deployment 
I0315 00:16:44.399] REVISION  CHANGE-CAUSE
I0315 00:16:44.399] 1         <none>
I0315 00:16:44.399] 
I0315 00:16:44.400] deployment.apps/nginx0-deployment 
I0315 00:16:44.400] REVISION  CHANGE-CAUSE
I0315 00:16:44.400] 1         <none>
I0315 00:16:44.400] 
I0315 00:16:44.400] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 00:16:44.400] has:Object 'Kind' is missing
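The three REVISION/CHANGE-CAUSE tables are one rollout history call over the deployment directory checked against three expectations (nginx0, nginx1, and the decode error). Sketch, with the recursive flag assumed as elsewhere in this block:
  $ kubectl rollout history -f hack/testdata/recursive/deployment --recursive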
I0315 00:16:44.480] deployment.apps "nginx1-deployment" force deleted
I0315 00:16:44.487] deployment.apps "nginx0-deployment" force deleted
W0315 00:16:44.588] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 00:16:44.588] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0315 00:16:45.001] I0315 00:16:45.000321   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:45.001] I0315 00:16:45.000602   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:16:45.588] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:45.757] replicationcontroller/busybox0 created
I0315 00:16:45.768] replicationcontroller/busybox1 created
W0315 00:16:45.869] I0315 00:16:45.764436   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608996-18381", Name:"busybox0", UID:"9e229c64-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1119", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-46g4p
W0315 00:16:45.869] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 00:16:45.869] I0315 00:16:45.774626   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552608996-18381", Name:"busybox1", UID:"9e23a755-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wh6sk
I0315 00:16:45.970] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 00:16:45.972] Successful
I0315 00:16:45.972] message:no rollbacker has been implemented for "ReplicationController"
I0315 00:16:45.973] no rollbacker has been implemented for "ReplicationController"
I0315 00:16:45.973] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0315 00:16:45.975] message:no rollbacker has been implemented for "ReplicationController"
I0315 00:16:45.975] no rollbacker has been implemented for "ReplicationController"
I0315 00:16:45.975] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:45.975] has:Object 'Kind' is missing
I0315 00:16:46.070] Successful
I0315 00:16:46.071] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:46.071] error: replicationcontrollers "busybox0" pausing is not supported
I0315 00:16:46.071] error: replicationcontrollers "busybox1" pausing is not supported
I0315 00:16:46.071] has:Object 'Kind' is missing
I0315 00:16:46.072] Successful
I0315 00:16:46.073] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:46.073] error: replicationcontrollers "busybox0" pausing is not supported
I0315 00:16:46.073] error: replicationcontrollers "busybox1" pausing is not supported
I0315 00:16:46.073] has:replicationcontrollers "busybox0" pausing is not supported
I0315 00:16:46.075] Successful
I0315 00:16:46.075] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:46.075] error: replicationcontrollers "busybox0" pausing is not supported
I0315 00:16:46.075] error: replicationcontrollers "busybox1" pausing is not supported
I0315 00:16:46.075] has:replicationcontrollers "busybox1" pausing is not supported
I0315 00:16:46.172] Successful
I0315 00:16:46.172] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:46.172] error: replicationcontrollers "busybox0" resuming is not supported
I0315 00:16:46.173] error: replicationcontrollers "busybox1" resuming is not supported
I0315 00:16:46.173] has:Object 'Kind' is missing
I0315 00:16:46.174] Successful
I0315 00:16:46.174] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:46.175] error: replicationcontrollers "busybox0" resuming is not supported
I0315 00:16:46.175] error: replicationcontrollers "busybox1" resuming is not supported
I0315 00:16:46.175] has:replicationcontrollers "busybox0" resuming is not supported
I0315 00:16:46.176] Successful
I0315 00:16:46.176] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 00:16:46.177] error: replicationcontrollers "busybox0" resuming is not supported
I0315 00:16:46.177] error: replicationcontrollers "busybox1" resuming is not supported
I0315 00:16:46.177] has:replicationcontrollers "busybox0" resuming is not supported
I0315 00:16:46.257] replicationcontroller "busybox0" force deleted
I0315 00:16:46.262] replicationcontroller "busybox1" force deleted
W0315 00:16:46.363] I0315 00:16:46.000903   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:46.363] I0315 00:16:46.001176   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:46.364] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 00:16:46.364] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0315 00:16:47.002] I0315 00:16:47.001629   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:47.002] I0315 00:16:47.001879   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:16:47.272] Recording: run_namespace_tests
I0315 00:16:47.272] Running command: run_namespace_tests
I0315 00:16:47.296] 
I0315 00:16:47.298] +++ Running case: test-cmd.run_namespace_tests 
... skipping 10 lines ...
W0315 00:16:50.004] I0315 00:16:50.003576   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:50.004] I0315 00:16:50.003855   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:51.005] I0315 00:16:51.004260   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:51.005] I0315 00:16:51.004545   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:52.005] I0315 00:16:52.004867   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:16:52.006] I0315 00:16:52.005090   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:16:52.548] E0315 00:16:52.547591   59005 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0315 00:16:52.704] namespace/my-namespace condition met
I0315 00:16:52.796] Successful
I0315 00:16:52.796] message:Error from server (NotFound): namespaces "my-namespace" not found
I0315 00:16:52.796] has: not found
I0315 00:16:52.911] core.sh:1336: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0315 00:16:52.990] namespace/other created
I0315 00:16:53.094] core.sh:1340: Successful get namespaces/other {{.metadata.name}}: other
I0315 00:16:53.189] core.sh:1344: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:53.361] pod/valid-pod created
I0315 00:16:53.463] core.sh:1348: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:16:53.560] core.sh:1350: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:16:53.643] Successful
I0315 00:16:53.644] message:error: a resource cannot be retrieved by name across all namespaces
I0315 00:16:53.644] has:a resource cannot be retrieved by name across all namespaces
I0315 00:16:53.737] core.sh:1357: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 00:16:53.816] pod "valid-pod" force deleted
I0315 00:16:53.915] core.sh:1361: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 00:16:53.993] namespace "other" deleted
W0315 00:16:54.093] I0315 00:16:53.005404   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 161 lines ...
I0315 00:17:15.032] +++ command: run_client_config_tests
I0315 00:17:15.046] +++ [0315 00:17:15] Creating namespace namespace-1552609035-3233
I0315 00:17:15.121] namespace/namespace-1552609035-3233 created
I0315 00:17:15.196] Context "test" modified.
I0315 00:17:15.203] +++ [0315 00:17:15] Testing client config
I0315 00:17:15.277] Successful
I0315 00:17:15.277] message:error: stat missing: no such file or directory
I0315 00:17:15.277] has:missing: no such file or directory
I0315 00:17:15.352] Successful
I0315 00:17:15.352] message:error: stat missing: no such file or directory
I0315 00:17:15.353] has:missing: no such file or directory
I0315 00:17:15.424] Successful
I0315 00:17:15.424] message:error: stat missing: no such file or directory
I0315 00:17:15.424] has:missing: no such file or directory
I0315 00:17:15.500] Successful
I0315 00:17:15.500] message:Error in configuration: context was not found for specified context: missing-context
I0315 00:17:15.500] has:context was not found for specified context: missing-context
I0315 00:17:15.574] Successful
I0315 00:17:15.575] message:error: no server found for cluster "missing-cluster"
I0315 00:17:15.575] has:no server found for cluster "missing-cluster"
I0315 00:17:15.650] Successful
I0315 00:17:15.650] message:error: auth info "missing-user" does not exist
I0315 00:17:15.650] has:auth info "missing-user" does not exist
W0315 00:17:15.751] I0315 00:17:15.018878   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:15.751] I0315 00:17:15.019098   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:17:15.851] Successful
I0315 00:17:15.852] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0315 00:17:15.852] has:Error loading config file
I0315 00:17:15.870] Successful
I0315 00:17:15.870] message:error: stat missing-config: no such file or directory
I0315 00:17:15.870] has:no such file or directory
I0315 00:17:15.887] +++ exit code: 0
I0315 00:17:15.932] Recording: run_service_accounts_tests
I0315 00:17:15.932] Running command: run_service_accounts_tests
I0315 00:17:15.955] 
I0315 00:17:15.958] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 48 lines ...
I0315 00:17:22.828] Labels:                        run=pi
I0315 00:17:22.828] Annotations:                   <none>
I0315 00:17:22.828] Schedule:                      59 23 31 2 *
I0315 00:17:22.828] Concurrency Policy:            Allow
I0315 00:17:22.828] Suspend:                       False
I0315 00:17:22.828] Successful Job History Limit:  824642196184
I0315 00:17:22.828] Failed Job History Limit:      1
I0315 00:17:22.829] Starting Deadline Seconds:     <unset>
I0315 00:17:22.829] Selector:                      <unset>
I0315 00:17:22.829] Parallelism:                   <unset>
I0315 00:17:22.829] Completions:                   <unset>
I0315 00:17:22.829] Pod Template:
I0315 00:17:22.829]   Labels:  run=pi
... skipping 31 lines ...
I0315 00:17:23.388]                 job-name=test-job
I0315 00:17:23.388]                 run=pi
I0315 00:17:23.388] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0315 00:17:23.388] Parallelism:    1
I0315 00:17:23.388] Completions:    1
I0315 00:17:23.389] Start Time:     Fri, 15 Mar 2019 00:17:23 +0000
I0315 00:17:23.389] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0315 00:17:23.389] Pod Template:
I0315 00:17:23.389]   Labels:  controller-uid=b4654037-46b7-11e9-9adf-0242ac110002
I0315 00:17:23.389]            job-name=test-job
I0315 00:17:23.389]            run=pi
I0315 00:17:23.389]   Containers:
I0315 00:17:23.389]    pi:
... skipping 411 lines ...
I0315 00:17:33.390]   sessionAffinity: None
I0315 00:17:33.390]   type: ClusterIP
I0315 00:17:33.390] status:
I0315 00:17:33.390]   loadBalancer: {}
W0315 00:17:33.491] I0315 00:17:33.030247   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:33.491] I0315 00:17:33.030454   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:17:33.492] error: you must specify resources by --filename when --local is set.
W0315 00:17:33.492] Example resource specifications include:
W0315 00:17:33.492]    '-f rsrc.yaml'
W0315 00:17:33.492]    '--filename=rsrc.json'
I0315 00:17:33.592] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0315 00:17:33.737] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0315 00:17:33.822] service "redis-master" deleted
... skipping 124 lines ...
I0315 00:17:41.100] daemonset.extensions/bind rolled back
W0315 00:17:41.201] I0315 00:17:41.034705   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:41.201] I0315 00:17:41.034913   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:17:41.302] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 00:17:41.312] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 00:17:41.431] Successful
I0315 00:17:41.432] message:error: unable to find specified revision 1000000 in history
I0315 00:17:41.432] has:unable to find specified revision
I0315 00:17:41.534] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 00:17:41.635] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 00:17:41.746] daemonset.extensions/bind rolled back
I0315 00:17:41.874] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0315 00:17:41.973] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0315 00:17:43.404] Namespace:    namespace-1552609062-5099
I0315 00:17:43.404] Selector:     app=guestbook,tier=frontend
I0315 00:17:43.405] Labels:       app=guestbook
I0315 00:17:43.405]               tier=frontend
I0315 00:17:43.405] Annotations:  <none>
I0315 00:17:43.405] Replicas:     3 current / 3 desired
I0315 00:17:43.405] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:43.405] Pod Template:
I0315 00:17:43.405]   Labels:  app=guestbook
I0315 00:17:43.405]            tier=frontend
I0315 00:17:43.406]   Containers:
I0315 00:17:43.406]    php-redis:
I0315 00:17:43.406]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 00:17:43.524] Namespace:    namespace-1552609062-5099
I0315 00:17:43.524] Selector:     app=guestbook,tier=frontend
I0315 00:17:43.524] Labels:       app=guestbook
I0315 00:17:43.524]               tier=frontend
I0315 00:17:43.524] Annotations:  <none>
I0315 00:17:43.524] Replicas:     3 current / 3 desired
I0315 00:17:43.524] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:43.524] Pod Template:
I0315 00:17:43.524]   Labels:  app=guestbook
I0315 00:17:43.525]            tier=frontend
I0315 00:17:43.525]   Containers:
I0315 00:17:43.525]    php-redis:
I0315 00:17:43.525]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 28 lines ...
I0315 00:17:43.730] Namespace:    namespace-1552609062-5099
I0315 00:17:43.730] Selector:     app=guestbook,tier=frontend
I0315 00:17:43.730] Labels:       app=guestbook
I0315 00:17:43.730]               tier=frontend
I0315 00:17:43.731] Annotations:  <none>
I0315 00:17:43.731] Replicas:     3 current / 3 desired
I0315 00:17:43.731] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:43.731] Pod Template:
I0315 00:17:43.731]   Labels:  app=guestbook
I0315 00:17:43.731]            tier=frontend
I0315 00:17:43.731]   Containers:
I0315 00:17:43.731]    php-redis:
I0315 00:17:43.732]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0315 00:17:43.757] Namespace:    namespace-1552609062-5099
I0315 00:17:43.757] Selector:     app=guestbook,tier=frontend
I0315 00:17:43.758] Labels:       app=guestbook
I0315 00:17:43.758]               tier=frontend
I0315 00:17:43.758] Annotations:  <none>
I0315 00:17:43.758] Replicas:     3 current / 3 desired
I0315 00:17:43.758] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:43.758] Pod Template:
I0315 00:17:43.758]   Labels:  app=guestbook
I0315 00:17:43.759]            tier=frontend
I0315 00:17:43.759]   Containers:
I0315 00:17:43.759]    php-redis:
I0315 00:17:43.759]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0315 00:17:43.915] Namespace:    namespace-1552609062-5099
I0315 00:17:43.915] Selector:     app=guestbook,tier=frontend
I0315 00:17:43.915] Labels:       app=guestbook
I0315 00:17:43.915]               tier=frontend
I0315 00:17:43.915] Annotations:  <none>
I0315 00:17:43.915] Replicas:     3 current / 3 desired
I0315 00:17:43.916] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:43.916] Pod Template:
I0315 00:17:43.916]   Labels:  app=guestbook
I0315 00:17:43.916]            tier=frontend
I0315 00:17:43.916]   Containers:
I0315 00:17:43.916]    php-redis:
I0315 00:17:43.916]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 00:17:44.030] Namespace:    namespace-1552609062-5099
I0315 00:17:44.030] Selector:     app=guestbook,tier=frontend
I0315 00:17:44.030] Labels:       app=guestbook
I0315 00:17:44.030]               tier=frontend
I0315 00:17:44.031] Annotations:  <none>
I0315 00:17:44.031] Replicas:     3 current / 3 desired
I0315 00:17:44.031] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:44.031] Pod Template:
I0315 00:17:44.031]   Labels:  app=guestbook
I0315 00:17:44.031]            tier=frontend
I0315 00:17:44.031]   Containers:
I0315 00:17:44.031]    php-redis:
I0315 00:17:44.031]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 00:17:44.144] Namespace:    namespace-1552609062-5099
I0315 00:17:44.144] Selector:     app=guestbook,tier=frontend
I0315 00:17:44.145] Labels:       app=guestbook
I0315 00:17:44.145]               tier=frontend
I0315 00:17:44.145] Annotations:  <none>
I0315 00:17:44.145] Replicas:     3 current / 3 desired
I0315 00:17:44.145] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:44.145] Pod Template:
I0315 00:17:44.145]   Labels:  app=guestbook
I0315 00:17:44.145]            tier=frontend
I0315 00:17:44.145]   Containers:
I0315 00:17:44.145]    php-redis:
I0315 00:17:44.145]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0315 00:17:44.265] Namespace:    namespace-1552609062-5099
I0315 00:17:44.265] Selector:     app=guestbook,tier=frontend
I0315 00:17:44.265] Labels:       app=guestbook
I0315 00:17:44.265]               tier=frontend
I0315 00:17:44.265] Annotations:  <none>
I0315 00:17:44.265] Replicas:     3 current / 3 desired
I0315 00:17:44.265] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:44.265] Pod Template:
I0315 00:17:44.266]   Labels:  app=guestbook
I0315 00:17:44.266]            tier=frontend
I0315 00:17:44.266]   Containers:
I0315 00:17:44.266]    php-redis:
I0315 00:17:44.266]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I0315 00:17:45.318] replicationcontroller/frontend scaled
I0315 00:17:45.421] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0315 00:17:45.502] replicationcontroller "frontend" deleted
W0315 00:17:45.603] I0315 00:17:44.036249   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:45.603] I0315 00:17:44.036779   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:17:45.604] I0315 00:17:44.465700   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609062-5099", Name:"frontend", UID:"c0571d0c-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-c786j
W0315 00:17:45.604] error: Expected replicas to be 3, was 2
W0315 00:17:45.604] I0315 00:17:45.035738   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609062-5099", Name:"frontend", UID:"c0571d0c-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sfqgs
W0315 00:17:45.604] I0315 00:17:45.037020   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:45.604] I0315 00:17:45.037163   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:17:45.605] I0315 00:17:45.325991   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609062-5099", Name:"frontend", UID:"c0571d0c-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-sfqgs
W0315 00:17:45.692] I0315 00:17:45.691528   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609062-5099", Name:"redis-master", UID:"c1d9dd06-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"1430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-p4bvp
I0315 00:17:45.792] replicationcontroller/redis-master created
... skipping 42 lines ...
I0315 00:17:47.610] service "expose-test-deployment" deleted
I0315 00:17:47.718] Successful
I0315 00:17:47.719] message:service/expose-test-deployment exposed
I0315 00:17:47.719] has:service/expose-test-deployment exposed
I0315 00:17:47.805] service "expose-test-deployment" deleted
I0315 00:17:47.901] Successful
I0315 00:17:47.901] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0315 00:17:47.901] See 'kubectl expose -h' for help and examples
I0315 00:17:47.901] has:invalid deployment: no selectors
I0315 00:17:47.989] Successful
I0315 00:17:47.990] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0315 00:17:47.990] See 'kubectl expose -h' for help and examples
I0315 00:17:47.990] has:invalid deployment: no selectors
W0315 00:17:48.090] I0315 00:17:48.038360   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:48.091] I0315 00:17:48.038577   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:17:48.162] I0315 00:17:48.161200   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment", UID:"c353a54d-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1535", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0315 00:17:48.167] I0315 00:17:48.166959   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-64bb598779", UID:"c354874f-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1536", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-mqjtn
... skipping 29 lines ...
I0315 00:17:50.223] service "frontend-3" deleted
I0315 00:17:50.231] service "frontend-4" deleted
I0315 00:17:50.240] service "frontend-5" deleted
W0315 00:17:50.340] I0315 00:17:50.039356   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:50.341] I0315 00:17:50.039587   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:17:50.441] Successful
I0315 00:17:50.442] message:error: cannot expose a Node
I0315 00:17:50.442] has:cannot expose
I0315 00:17:50.450] Successful
I0315 00:17:50.451] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0315 00:17:50.451] has:metadata.name: Invalid value
I0315 00:17:50.561] Successful
I0315 00:17:50.561] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 36 lines ...
I0315 00:17:53.159] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0315 00:17:53.299] core.sh:1259: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0315 00:17:53.396] horizontalpodautoscaler.autoscaling "frontend" deleted
I0315 00:17:53.492] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0315 00:17:53.593] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0315 00:17:53.674] horizontalpodautoscaler.autoscaling "frontend" deleted
W0315 00:17:53.775] Error: required flag(s) "max" not set
W0315 00:17:53.775] 
W0315 00:17:53.775] 
W0315 00:17:53.775] Examples:
W0315 00:17:53.776]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0315 00:17:53.776]   kubectl autoscale deployment foo --min=2 --max=10
W0315 00:17:53.776]   
... skipping 57 lines ...
I0315 00:17:54.016]           requests:
I0315 00:17:54.017]             cpu: 300m
I0315 00:17:54.017]       terminationGracePeriodSeconds: 0
I0315 00:17:54.017] status: {}
W0315 00:17:54.117] I0315 00:17:54.042512   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:54.118] I0315 00:17:54.042723   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:17:54.118] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0315 00:17:54.285] deployment.apps/nginx-deployment-resources created
W0315 00:17:54.386] I0315 00:17:54.291795   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources", UID:"c6faf8a8-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0315 00:17:54.386] I0315 00:17:54.296664   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources-695c766d58", UID:"c6fbfaae-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1678", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-wmdj7
W0315 00:17:54.387] I0315 00:17:54.301157   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources-695c766d58", UID:"c6fbfaae-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1678", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-9d5fc
W0315 00:17:54.387] I0315 00:17:54.301565   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources-695c766d58", UID:"c6fbfaae-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1678", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-l2cmn
I0315 00:17:54.487] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0315 00:17:54.689] deployment.extensions/nginx-deployment-resources resource requirements updated
I0315 00:17:54.793] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0315 00:17:54.890] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0315 00:17:55.081] deployment.extensions/nginx-deployment-resources resource requirements updated
W0315 00:17:55.182] I0315 00:17:54.696023   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources", UID:"c6faf8a8-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0315 00:17:55.182] I0315 00:17:54.701055   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"c7399503-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-bbzd9
W0315 00:17:55.183] error: unable to find container named redis
W0315 00:17:55.183] I0315 00:17:55.043021   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:55.183] I0315 00:17:55.043295   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:17:55.183] I0315 00:17:55.109722   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources", UID:"c6faf8a8-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5b7fc6dd8b to 0
W0315 00:17:55.183] I0315 00:17:55.117007   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"c7399503-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5b7fc6dd8b-bbzd9
W0315 00:17:55.184] I0315 00:17:55.136355   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources", UID:"c6faf8a8-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1703", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0315 00:17:55.184] I0315 00:17:55.148616   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609062-5099", Name:"nginx-deployment-resources-6bc4567bf6", UID:"c77598b0-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-nf6b5
... skipping 224 lines ...
I0315 00:17:55.848]     status: "True"
I0315 00:17:55.848]     type: Progressing
I0315 00:17:55.848]   observedGeneration: 4
I0315 00:17:55.848]   replicas: 4
I0315 00:17:55.848]   unavailableReplicas: 4
I0315 00:17:55.848]   updatedReplicas: 1
W0315 00:17:55.949] error: you must specify resources by --filename when --local is set.
W0315 00:17:55.949] Example resource specifications include:
W0315 00:17:55.949]    '-f rsrc.yaml'
W0315 00:17:55.950]    '--filename=rsrc.json'
W0315 00:17:56.044] I0315 00:17:56.043533   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:17:56.044] I0315 00:17:56.043829   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:17:56.145] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 48 lines ...
I0315 00:17:57.752]                 pod-template-hash=7875bf5c8b
I0315 00:17:57.752] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0315 00:17:57.752]                 deployment.kubernetes.io/max-replicas: 2
I0315 00:17:57.752]                 deployment.kubernetes.io/revision: 1
I0315 00:17:57.752] Controlled By:  Deployment/test-nginx-apps
I0315 00:17:57.752] Replicas:       1 current / 1 desired
I0315 00:17:57.752] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 00:17:57.752] Pod Template:
I0315 00:17:57.752]   Labels:  app=test-nginx-apps
I0315 00:17:57.753]            pod-template-hash=7875bf5c8b
I0315 00:17:57.753]   Containers:
I0315 00:17:57.753]    nginx:
I0315 00:17:57.753]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 103 lines ...
W0315 00:18:02.355] I0315 00:18:02.047314   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:18:03.048] I0315 00:18:03.047615   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:03.048] I0315 00:18:03.047813   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:03.351] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 00:18:03.622] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 00:18:03.823] deployment.extensions/nginx rolled back
W0315 00:18:03.925] error: unable to find specified revision 1000000 in history
W0315 00:18:04.049] I0315 00:18:04.048141   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:04.050] I0315 00:18:04.048724   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:04.991] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 00:18:05.192] deployment.extensions/nginx paused
W0315 00:18:05.293] I0315 00:18:05.049081   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:05.294] I0315 00:18:05.049583   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:18:05.474] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0315 00:18:05.729] deployment.extensions/nginx resumed
W0315 00:18:06.051] I0315 00:18:06.049952   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:06.051] I0315 00:18:06.050208   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:06.159] deployment.extensions/nginx rolled back
I0315 00:18:06.551]     deployment.kubernetes.io/revision-history: 1,3
W0315 00:18:06.848] error: desired revision (3) is different from the running revision (5)
W0315 00:18:07.051] I0315 00:18:07.050618   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:07.052] I0315 00:18:07.050994   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:07.281] deployment.apps/nginx2 created
W0315 00:18:07.382] I0315 00:18:07.293205   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609076-31960", Name:"nginx2", UID:"ceb96af6-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1921", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
W0315 00:18:07.383] I0315 00:18:07.303917   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609076-31960", Name:"nginx2-78cb9c866", UID:"cebb51fa-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1922", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-r9gf4
W0315 00:18:07.384] I0315 00:18:07.319188   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609076-31960", Name:"nginx2-78cb9c866", UID:"cebb51fa-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1922", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-m629j
... skipping 17 lines ...
W0315 00:18:09.413] I0315 00:18:09.052823   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:18:09.415] I0315 00:18:09.326007   59005 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552609076-31960", Name:"nginx-deployment", UID:"cf58b4bf-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1970", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0315 00:18:09.416] I0315 00:18:09.338686   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609076-31960", Name:"nginx-deployment-5bfd55c857", UID:"cff10ab7-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1971", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-wcmhc
I0315 00:18:09.541] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 00:18:09.771] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0315 00:18:10.276] deployment.extensions/nginx-deployment image updated
W0315 00:18:10.377] error: unable to find container named "redis"
W0315 00:18:10.378] I0315 00:18:10.054372   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:10.379] I0315 00:18:10.054961   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:10.527] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 00:18:10.778] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0315 00:18:11.114] deployment.apps/nginx-deployment image updated
W0315 00:18:11.216] I0315 00:18:11.055544   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 129 lines ...
I0315 00:18:22.656] Namespace:    namespace-1552609099-18190
I0315 00:18:22.657] Selector:     app=guestbook,tier=frontend
I0315 00:18:22.657] Labels:       app=guestbook
I0315 00:18:22.657]               tier=frontend
I0315 00:18:22.657] Annotations:  <none>
I0315 00:18:22.657] Replicas:     3 current / 3 desired
I0315 00:18:22.657] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:22.657] Pod Template:
I0315 00:18:22.657]   Labels:  app=guestbook
I0315 00:18:22.658]            tier=frontend
I0315 00:18:22.658]   Containers:
I0315 00:18:22.658]    php-redis:
I0315 00:18:22.658]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0315 00:18:22.762] Namespace:    namespace-1552609099-18190
I0315 00:18:22.762] Selector:     app=guestbook,tier=frontend
I0315 00:18:22.762] Labels:       app=guestbook
I0315 00:18:22.762]               tier=frontend
I0315 00:18:22.763] Annotations:  <none>
I0315 00:18:22.763] Replicas:     3 current / 3 desired
I0315 00:18:22.763] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:22.763] Pod Template:
I0315 00:18:22.763]   Labels:  app=guestbook
I0315 00:18:22.763]            tier=frontend
I0315 00:18:22.763]   Containers:
I0315 00:18:22.763]    php-redis:
I0315 00:18:22.763]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0315 00:18:22.862] Namespace:    namespace-1552609099-18190
I0315 00:18:22.863] Selector:     app=guestbook,tier=frontend
I0315 00:18:22.863] Labels:       app=guestbook
I0315 00:18:22.863]               tier=frontend
I0315 00:18:22.863] Annotations:  <none>
I0315 00:18:22.863] Replicas:     3 current / 3 desired
I0315 00:18:22.863] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:22.863] Pod Template:
I0315 00:18:22.863]   Labels:  app=guestbook
I0315 00:18:22.863]            tier=frontend
I0315 00:18:22.864]   Containers:
I0315 00:18:22.864]    php-redis:
I0315 00:18:22.864]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0315 00:18:22.982] Namespace:    namespace-1552609099-18190
I0315 00:18:22.982] Selector:     app=guestbook,tier=frontend
I0315 00:18:22.982] Labels:       app=guestbook
I0315 00:18:22.982]               tier=frontend
I0315 00:18:22.982] Annotations:  <none>
I0315 00:18:22.982] Replicas:     3 current / 3 desired
I0315 00:18:22.983] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:22.983] Pod Template:
I0315 00:18:22.983]   Labels:  app=guestbook
I0315 00:18:22.983]            tier=frontend
I0315 00:18:22.983]   Containers:
I0315 00:18:22.983]    php-redis:
I0315 00:18:22.983]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 20 lines ...
I0315 00:18:23.188] Namespace:    namespace-1552609099-18190
I0315 00:18:23.188] Selector:     app=guestbook,tier=frontend
I0315 00:18:23.188] Labels:       app=guestbook
I0315 00:18:23.188]               tier=frontend
I0315 00:18:23.188] Annotations:  <none>
I0315 00:18:23.188] Replicas:     3 current / 3 desired
I0315 00:18:23.188] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:23.189] Pod Template:
I0315 00:18:23.189]   Labels:  app=guestbook
I0315 00:18:23.189]            tier=frontend
I0315 00:18:23.189]   Containers:
I0315 00:18:23.189]    php-redis:
I0315 00:18:23.189]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0315 00:18:23.231] Namespace:    namespace-1552609099-18190
I0315 00:18:23.231] Selector:     app=guestbook,tier=frontend
I0315 00:18:23.231] Labels:       app=guestbook
I0315 00:18:23.231]               tier=frontend
I0315 00:18:23.231] Annotations:  <none>
I0315 00:18:23.231] Replicas:     3 current / 3 desired
I0315 00:18:23.232] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:23.232] Pod Template:
I0315 00:18:23.232]   Labels:  app=guestbook
I0315 00:18:23.232]            tier=frontend
I0315 00:18:23.232]   Containers:
I0315 00:18:23.232]    php-redis:
I0315 00:18:23.232]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0315 00:18:23.333] Namespace:    namespace-1552609099-18190
I0315 00:18:23.333] Selector:     app=guestbook,tier=frontend
I0315 00:18:23.333] Labels:       app=guestbook
I0315 00:18:23.334]               tier=frontend
I0315 00:18:23.334] Annotations:  <none>
I0315 00:18:23.334] Replicas:     3 current / 3 desired
I0315 00:18:23.334] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:23.334] Pod Template:
I0315 00:18:23.334]   Labels:  app=guestbook
I0315 00:18:23.334]            tier=frontend
I0315 00:18:23.334]   Containers:
I0315 00:18:23.335]    php-redis:
I0315 00:18:23.335]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0315 00:18:23.436] Namespace:    namespace-1552609099-18190
I0315 00:18:23.436] Selector:     app=guestbook,tier=frontend
I0315 00:18:23.437] Labels:       app=guestbook
I0315 00:18:23.437]               tier=frontend
I0315 00:18:23.437] Annotations:  <none>
I0315 00:18:23.437] Replicas:     3 current / 3 desired
I0315 00:18:23.437] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:23.437] Pod Template:
I0315 00:18:23.437]   Labels:  app=guestbook
I0315 00:18:23.437]            tier=frontend
I0315 00:18:23.437]   Containers:
I0315 00:18:23.438]    php-redis:
I0315 00:18:23.438]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 193 lines ...
I0315 00:18:28.740] horizontalpodautoscaler.autoscaling "frontend" deleted
W0315 00:18:28.841] I0315 00:18:28.068670   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:28.841] I0315 00:18:28.068993   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:18:28.842] I0315 00:18:28.175885   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609099-18190", Name:"frontend", UID:"db2d7ce1-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hqrlb
W0315 00:18:28.842] I0315 00:18:28.182284   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609099-18190", Name:"frontend", UID:"db2d7ce1-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s9qtx
W0315 00:18:28.842] I0315 00:18:28.182327   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552609099-18190", Name:"frontend", UID:"db2d7ce1-46b7-11e9-9adf-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w4r42
W0315 00:18:28.842] Error: required flag(s) "max" not set
W0315 00:18:28.842] 
W0315 00:18:28.843] 
W0315 00:18:28.843] Examples:
W0315 00:18:28.843]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0315 00:18:28.843]   kubectl autoscale deployment foo --min=2 --max=10
W0315 00:18:28.843]   
... skipping 97 lines ...
I0315 00:18:32.177] statefulset.apps/nginx rolled back
W0315 00:18:32.277] I0315 00:18:32.070524   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:32.278] I0315 00:18:32.070774   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:32.379] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0315 00:18:32.383] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 00:18:32.494] Successful
I0315 00:18:32.494] message:error: unable to find specified revision 1000000 in history
I0315 00:18:32.494] has:unable to find specified revision
I0315 00:18:32.588] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0315 00:18:32.687] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 00:18:32.803] statefulset.apps/nginx rolled back
I0315 00:18:32.912] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0315 00:18:33.014] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 62 lines ...
I0315 00:18:34.976] Name:         mock
I0315 00:18:34.976] Namespace:    namespace-1552609113-2895
I0315 00:18:34.976] Selector:     app=mock
I0315 00:18:34.976] Labels:       app=mock
I0315 00:18:34.976] Annotations:  <none>
I0315 00:18:34.976] Replicas:     1 current / 1 desired
I0315 00:18:34.976] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:34.976] Pod Template:
I0315 00:18:34.976]   Labels:  app=mock
I0315 00:18:34.976]   Containers:
I0315 00:18:34.976]    mock-container:
I0315 00:18:34.976]     Image:        k8s.gcr.io/pause:2.0
I0315 00:18:34.977]     Port:         9949/TCP
... skipping 62 lines ...
I0315 00:18:37.330] Name:         mock
I0315 00:18:37.330] Namespace:    namespace-1552609113-2895
I0315 00:18:37.330] Selector:     app=mock
I0315 00:18:37.330] Labels:       app=mock
I0315 00:18:37.330] Annotations:  <none>
I0315 00:18:37.330] Replicas:     1 current / 1 desired
I0315 00:18:37.330] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:37.331] Pod Template:
I0315 00:18:37.331]   Labels:  app=mock
I0315 00:18:37.331]   Containers:
I0315 00:18:37.331]    mock-container:
I0315 00:18:37.331]     Image:        k8s.gcr.io/pause:2.0
I0315 00:18:37.331]     Port:         9949/TCP
... skipping 60 lines ...
I0315 00:18:39.718] Name:         mock
I0315 00:18:39.718] Namespace:    namespace-1552609113-2895
I0315 00:18:39.718] Selector:     app=mock
I0315 00:18:39.718] Labels:       app=mock
I0315 00:18:39.718] Annotations:  <none>
I0315 00:18:39.718] Replicas:     1 current / 1 desired
I0315 00:18:39.718] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:39.719] Pod Template:
I0315 00:18:39.719]   Labels:  app=mock
I0315 00:18:39.719]   Containers:
I0315 00:18:39.719]    mock-container:
I0315 00:18:39.719]     Image:        k8s.gcr.io/pause:2.0
I0315 00:18:39.719]     Port:         9949/TCP
... skipping 46 lines ...
I0315 00:18:42.036] Namespace:    namespace-1552609113-2895
I0315 00:18:42.036] Selector:     app=mock
I0315 00:18:42.036] Labels:       app=mock
I0315 00:18:42.036]               status=replaced
I0315 00:18:42.037] Annotations:  <none>
I0315 00:18:42.037] Replicas:     1 current / 1 desired
I0315 00:18:42.037] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:42.037] Pod Template:
I0315 00:18:42.037]   Labels:  app=mock
I0315 00:18:42.037]   Containers:
I0315 00:18:42.037]    mock-container:
I0315 00:18:42.038]     Image:        k8s.gcr.io/pause:2.0
I0315 00:18:42.038]     Port:         9949/TCP
... skipping 11 lines ...
I0315 00:18:42.047] Namespace:    namespace-1552609113-2895
I0315 00:18:42.047] Selector:     app=mock2
I0315 00:18:42.048] Labels:       app=mock2
I0315 00:18:42.048]               status=replaced
I0315 00:18:42.048] Annotations:  <none>
I0315 00:18:42.048] Replicas:     1 current / 1 desired
I0315 00:18:42.048] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 00:18:42.048] Pod Template:
I0315 00:18:42.048]   Labels:  app=mock2
I0315 00:18:42.048]   Containers:
I0315 00:18:42.048]    mock-container:
I0315 00:18:42.048]     Image:        k8s.gcr.io/pause:2.0
I0315 00:18:42.048]     Port:         9949/TCP
... skipping 119 lines ...
I0315 00:18:47.722] persistentvolume "pv0001" deleted
W0315 00:18:47.823] I0315 00:18:46.078384   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:47.824] I0315 00:18:46.078657   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:18:47.824] I0315 00:18:46.314345   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609113-2895", Name:"mock", UID:"e5fd846f-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"2684", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-9dfgp
W0315 00:18:47.825] I0315 00:18:47.078973   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:47.825] I0315 00:18:47.079161   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 00:18:47.825] E0315 00:18:47.547056   59005 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
W0315 00:18:47.907] E0315 00:18:47.906699   59005 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0315 00:18:48.008] persistentvolume/pv0002 created
I0315 00:18:48.008] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0315 00:18:48.093] persistentvolume "pv0002" deleted
W0315 00:18:48.194] I0315 00:18:48.079470   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:48.194] I0315 00:18:48.079679   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:48.295] persistentvolume/pv0003 created
... skipping 500 lines ...
I0315 00:18:53.545] yes
I0315 00:18:53.546] has:the server doesn't have a resource type
I0315 00:18:53.624] Successful
I0315 00:18:53.625] message:yes
I0315 00:18:53.625] has:yes
I0315 00:18:53.700] Successful
I0315 00:18:53.700] message:error: --subresource can not be used with NonResourceURL
I0315 00:18:53.700] has:subresource can not be used with NonResourceURL
I0315 00:18:53.783] Successful
I0315 00:18:53.868] Successful
I0315 00:18:53.868] message:yes
I0315 00:18:53.869] 0
I0315 00:18:53.869] has:0
... skipping 6 lines ...
I0315 00:18:54.072] role.rbac.authorization.k8s.io/testing-R reconciled
I0315 00:18:54.175] legacy-script.sh:769: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0315 00:18:54.275] legacy-script.sh:770: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0315 00:18:54.374] legacy-script.sh:771: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0315 00:18:54.477] legacy-script.sh:772: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0315 00:18:54.562] Successful
I0315 00:18:54.562] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0315 00:18:54.563] has:only rbac.authorization.k8s.io/v1 is supported
I0315 00:18:54.657] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0315 00:18:54.665] role.rbac.authorization.k8s.io "testing-R" deleted
I0315 00:18:54.677] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0315 00:18:54.688] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0315 00:18:54.700] Recording: run_retrieve_multiple_tests
... skipping 48 lines ...
I0315 00:18:55.881] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0315 00:18:55.883] +++ working dir: /go/src/k8s.io/kubernetes
I0315 00:18:55.886] +++ command: run_kubectl_explain_tests
I0315 00:18:55.897] +++ [0315 00:18:55] Testing kubectl(v1:explain)
W0315 00:18:55.998] I0315 00:18:55.751103   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609135-2293", Name:"cassandra", UID:"eb5f1238-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"2765", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-qvjkw
W0315 00:18:55.998] I0315 00:18:55.767053   59005 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552609135-2293", Name:"cassandra", UID:"eb5f1238-46b7-11e9-9adf-0242ac110002", APIVersion:"v1", ResourceVersion:"2765", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-tqwcn
W0315 00:18:55.998] E0315 00:18:55.771962   59005 replica_set.go:450] Sync "namespace-1552609135-2293/cassandra" failed with replicationcontrollers "cassandra" not found
W0315 00:18:56.084] I0315 00:18:56.084039   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:18:56.085] I0315 00:18:56.084256   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:18:56.185] KIND:     Pod
I0315 00:18:56.186] VERSION:  v1
I0315 00:18:56.186] 
I0315 00:18:56.186] DESCRIPTION:
... skipping 1153 lines ...
I0315 00:19:23.353] message:node/127.0.0.1 already uncordoned (dry run)
I0315 00:19:23.353] has:already uncordoned
I0315 00:19:23.451] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0315 00:19:23.535] node/127.0.0.1 labeled
I0315 00:19:23.633] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0315 00:19:23.707] Successful
I0315 00:19:23.708] message:error: cannot specify both a node name and a --selector option
I0315 00:19:23.708] See 'kubectl drain -h' for help and examples
I0315 00:19:23.708] has:cannot specify both a node name
I0315 00:19:23.781] Successful
I0315 00:19:23.782] message:error: USAGE: cordon NODE [flags]
I0315 00:19:23.782] See 'kubectl cordon -h' for help and examples
I0315 00:19:23.782] has:error\: USAGE\: cordon NODE
I0315 00:19:23.859] node/127.0.0.1 already uncordoned
I0315 00:19:23.939] Successful
I0315 00:19:23.939] message:error: You must provide one or more resources by argument or filename.
I0315 00:19:23.939] Example resource specifications include:
I0315 00:19:23.939]    '-f rsrc.yaml'
I0315 00:19:23.939]    '--filename=rsrc.json'
I0315 00:19:23.940]    '<resource> <name>'
I0315 00:19:23.940]    '<resource>'
I0315 00:19:23.940] has:must provide one or more resources
... skipping 21 lines ...
I0315 00:19:24.458] Successful
I0315 00:19:24.458] message:The following compatible plugins are available:
I0315 00:19:24.458] 
I0315 00:19:24.459] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0315 00:19:24.459]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0315 00:19:24.459] 
I0315 00:19:24.459] error: one plugin warning was found
I0315 00:19:24.459] has:kubectl-version overwrites existing command: "kubectl version"
I0315 00:19:24.532] Successful
I0315 00:19:24.533] message:The following compatible plugins are available:
I0315 00:19:24.533] 
I0315 00:19:24.533] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0315 00:19:24.533] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0315 00:19:24.533]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0315 00:19:24.533] 
I0315 00:19:24.533] error: one plugin warning was found
I0315 00:19:24.534] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0315 00:19:24.609] Successful
I0315 00:19:24.609] message:The following compatible plugins are available:
I0315 00:19:24.609] 
I0315 00:19:24.609] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0315 00:19:24.609] has:plugins are available
I0315 00:19:24.686] Successful
I0315 00:19:24.687] message:
I0315 00:19:24.687] error: unable to find any kubectl plugins in your PATH
I0315 00:19:24.687] has:unable to find any kubectl plugins in your PATH
I0315 00:19:24.761] Successful
I0315 00:19:24.761] message:I am plugin foo
I0315 00:19:24.761] has:plugin foo
I0315 00:19:24.835] Successful
I0315 00:19:24.836] message:Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1218+dfa25fcc7722d2", GitCommit:"dfa25fcc7722d2c9f8c3e05bc48e822d6a956069", GitTreeState:"clean", BuildDate:"2019-03-15T00:12:24Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0315 00:19:24.931] 
I0315 00:19:24.933] +++ Running case: test-cmd.run_impersonation_tests 
I0315 00:19:24.936] +++ working dir: /go/src/k8s.io/kubernetes
I0315 00:19:24.939] +++ command: run_impersonation_tests
I0315 00:19:24.950] +++ [0315 00:19:24] Testing impersonation
I0315 00:19:25.023] Successful
I0315 00:19:25.024] message:error: requesting groups or user-extra for  without impersonating a user
I0315 00:19:25.024] has:without impersonating a user
W0315 00:19:25.124] I0315 00:19:25.100329   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 00:19:25.125] I0315 00:19:25.100685   55707 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 00:19:25.225] certificatesigningrequest.certificates.k8s.io/foo created
I0315 00:19:25.312] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0315 00:19:25.406] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
... skipping 52 lines ...
W0315 00:19:28.853] I0315 00:19:28.847964   55707 controller.go:176] Shutting down kubernetes service endpoint reconciler
W0315 00:19:28.853] I0315 00:19:28.848055   55707 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0315 00:19:28.854] I0315 00:19:28.849527   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.854] I0315 00:19:28.849550   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.855] I0315 00:19:28.849763   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.855] I0315 00:19:28.849771   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.856] W0315 00:19:28.850111   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.856] I0315 00:19:28.850385   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.856] I0315 00:19:28.850625   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.857] I0315 00:19:28.850725   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.857] I0315 00:19:28.850789   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.858] I0315 00:19:28.850817   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.858] I0315 00:19:28.850859   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 66 lines ...
W0315 00:19:28.871] I0315 00:19:28.852535   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.871] I0315 00:19:28.852579   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.871] I0315 00:19:28.855784   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.871] I0315 00:19:28.855798   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.872] I0315 00:19:28.855809   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.872] I0315 00:19:28.852416   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.872] W0315 00:19:28.855832   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.872] I0315 00:19:28.852615   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.872] I0315 00:19:28.855857   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.873] I0315 00:19:28.852582   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.873] I0315 00:19:28.855873   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.873] I0315 00:19:28.855888   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.873] I0315 00:19:28.855889   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 9 lines ...
W0315 00:19:28.875] I0315 00:19:28.855911   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.875] I0315 00:19:28.853272   55707 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0315 00:19:28.876] I0315 00:19:28.853324   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.876] I0315 00:19:28.855987   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.876] I0315 00:19:28.853353   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.876] I0315 00:19:28.856007   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.876] W0315 00:19:28.853362   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.877] I0315 00:19:28.853394   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.877] I0315 00:19:28.853417   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.877] I0315 00:19:28.856098   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.877] W0315 00:19:28.853431   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.878] I0315 00:19:28.853437   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.878] I0315 00:19:28.856170   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.878] W0315 00:19:28.853518   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 43 lines ...
W0315 00:19:28.891] I0315 00:19:28.854306   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 9 lines ...
W0315 00:19:28.893] W0315 00:19:28.854408   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 7 lines ...
W0315 00:19:28.896] I0315 00:19:28.854619   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.896] I0315 00:19:28.857405   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.896] I0315 00:19:28.854742   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.896] I0315 00:19:28.857436   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.896] I0315 00:19:28.854776   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.897] I0315 00:19:28.857453   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 4 lines ...
W0315 00:19:28.897] I0315 00:19:28.854810   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.898] I0315 00:19:28.857537   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.898] I0315 00:19:28.854849   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.898] I0315 00:19:28.857558   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.898] I0315 00:19:28.854890   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.899] I0315 00:19:28.857577   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.899] W0315 00:19:28.854973   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.899] I0315 00:19:28.854970   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.899] I0315 00:19:28.857607   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.900] W0315 00:19:28.855017   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 10 lines ...
W0315 00:19:28.903] I0315 00:19:28.855303   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.903] I0315 00:19:28.857957   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.903] W0315 00:19:28.855323   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.904] I0315 00:19:28.855353   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.904] I0315 00:19:28.857994   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.904] W0315 00:19:28.855409   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.904] I0315 00:19:28.855557   55707 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0315 00:19:28.904] W0315 00:19:28.855566   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.905] W0315 00:19:28.855638   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.905] W0315 00:19:28.855642   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.905] I0315 00:19:28.855714   55707 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 00:19:28.906] E0315 00:19:28.855745   55707 controller.go:179] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
W0315 00:19:28.906] W0315 00:19:28.855768   55707 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0315 00:19:28.913] + make test-integration
I0315 00:19:29.014] No resources found
I0315 00:19:29.014] No resources found
I0315 00:19:29.014] +++ [0315 00:19:28] TESTS PASSED
I0315 00:19:29.014] junit report dir: /workspace/artifacts
I0315 00:19:29.014] +++ [0315 00:19:28] Clean up complete
... skipping 32 lines ...
I0315 00:33:43.078] ok  	k8s.io/kubernetes/test/integration/openshift	0.904s
I0315 00:33:43.078] ok  	k8s.io/kubernetes/test/integration/pods	11.783s
I0315 00:33:43.078] ok  	k8s.io/kubernetes/test/integration/quota	9.630s
I0315 00:33:43.078] ok  	k8s.io/kubernetes/test/integration/replicaset	64.555s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	56.133s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/scale	6.246s
I0315 00:33:43.079] FAIL	k8s.io/kubernetes/test/integration/scheduler	561.371s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/scheduler_perf	1.210s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/secrets	4.656s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/serviceaccount	67.938s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/serving	52.804s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/statefulset	12.079s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/storageclasses	4.921s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/tls	7.659s
I0315 00:33:43.079] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.222s
I0315 00:33:43.080] ok  	k8s.io/kubernetes/test/integration/volume	94.212s
I0315 00:33:43.080] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	149.369s
I0315 00:33:58.127] +++ [0315 00:33:58] Saved JUnit XML test report to /workspace/artifacts/junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190315-001939.xml
I0315 00:33:58.130] Makefile:184: recipe for target 'test' failed
I0315 00:33:58.141] +++ [0315 00:33:58] Cleaning up etcd
W0315 00:33:58.242] make[1]: *** [test] Error 1
W0315 00:33:58.242] !!! [0315 00:33:58] Call tree:
W0315 00:33:58.242] !!! [0315 00:33:58]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0315 00:33:58.410] +++ [0315 00:33:58] Integration test cleanup complete
I0315 00:33:58.410] Makefile:203: recipe for target 'test-integration' failed
W0315 00:33:58.511] make: *** [test-integration] Error 1
W0315 00:34:01.206] Traceback (most recent call last):
W0315 00:34:01.207]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0315 00:34:01.207]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0315 00:34:01.207]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0315 00:34:01.207]     check(*cmd)
W0315 00:34:01.207]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0315 00:34:01.207]     subprocess.check_call(cmd)
W0315 00:34:01.207]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 00:34:01.256]     raise CalledProcessError(retcode, cmd)
W0315 00:34:01.257] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
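[editor note] The traceback above shows the scenario wrapper (scenarios/kubernetes_verify.py) shelling out to docker via subprocess. The sketch below is an assumption reconstructed only from the frames shown (check() at line 48 forwarding to subprocess.check_call); it is not the actual script, but it illustrates why the non-zero exit status from test-dockerized.sh surfaces here as a bare CalledProcessError rather than a richer error.

    # Minimal sketch, assuming check() simply forwards its arguments to
    # subprocess.check_call as the traceback frames suggest; the real
    # scenarios/kubernetes_verify.py may log or decorate the command differently.
    import subprocess

    def check(*cmd):
        """Run cmd and raise CalledProcessError if it exits non-zero."""
        subprocess.check_call(cmd)

    # Hypothetical usage mirroring the failing invocation above (arguments abbreviated):
    # check('docker', 'run', '--rm=true', '--privileged=true', ...,
    #       'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')

Because make test-integration exits with status 2 inside the container, check_call re-raises it, and the outer runner records it as "Command failed" below.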
E0315 00:34:01.261] Command failed
I0315 00:34:01.261] process 528 exited with code 1 after 27.9m
E0315 00:34:01.261] FAIL: ci-kubernetes-integration-master
I0315 00:34:01.262] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 00:34:01.852] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 00:34:01.909] process 127704 exited with code 0 after 0.0m
I0315 00:34:01.909] Call:  gcloud config get-value account
I0315 00:34:02.240] process 127716 exited with code 0 after 0.0m
I0315 00:34:02.241] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 00:34:02.241] Upload result and artifacts...
I0315 00:34:02.241] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/9484
I0315 00:34:02.242] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9484/artifacts
W0315 00:34:03.438] CommandException: One or more URLs matched no objects.
E0315 00:34:03.586] Command failed
I0315 00:34:03.586] process 127728 exited with code 1 after 0.0m
W0315 00:34:03.587] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9484/artifacts not exist yet
I0315 00:34:03.587] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9484/artifacts
I0315 00:34:10.804] process 127870 exited with code 0 after 0.1m
W0315 00:34:10.805] metadata path /workspace/_artifacts/metadata.json does not exist
W0315 00:34:10.805] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...