Result: FAILURE
Tests: 1 failed / 648 succeeded
Started: 2019-03-15 18:22
Elapsed: 27m49s
Revision:
Builder: gke-prow-containerd-pool-99179761-104h
pod: 1f1fce53-474f-11e9-ab9f-0a580a6c0a8e
resultstore: https://source.cloud.google.com/results/invocations/20ac15b3-bb3f-498a-9595-81d71567c5ee/targets/test
infra-commit: 9eea277b7
repo: k8s.io/kubernetes
repo-commit: b0494b081d5c97c21115cd2921f7c5b536470591
repos: k8s.io/kubernetes: master

Test Failures


k8s.io/kubernetes/test/integration/deployment TestDeploymentAvailableCondition 6.34s

go test -v k8s.io/kubernetes/test/integration/deployment -run TestDeploymentAvailableCondition$
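The command above reruns only the failed test. Note that the Kubernetes integration tests expect a local etcd on 127.0.0.1:2379, as seen in the log below. As a sketch, assuming the standard Kubernetes make targets (WHAT and KUBE_TEST_ARGS from the contributor testing guide; the exact invocation may differ by branch), the test could also be run from a repository checkout roughly as:

make test-integration WHAT=./test/integration/deployment KUBE_TEST_ARGS="-run TestDeploymentAvailableCondition$"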
I0315 18:41:51.232923  121247 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0315 18:41:51.232948  121247 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0315 18:41:51.232957  121247 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0315 18:41:51.232968  121247 master.go:233] Using reconciler: 
I0315 18:41:51.234852  121247 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.234976  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.235024  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.235085  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.235153  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.235591  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.235665  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.235741  121247 store.go:1319] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0315 18:41:51.235779  121247 reflector.go:161] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0315 18:41:51.235777  121247 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.235995  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.236010  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.236041  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.236098  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.236431  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.236475  121247 store.go:1319] Monitoring events count at <storage-prefix>//events
I0315 18:41:51.236484  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.236506  121247 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.236579  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.236592  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.236621  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.236665  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.236907  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.236960  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.237008  121247 store.go:1319] Monitoring limitranges count at <storage-prefix>//limitranges
I0315 18:41:51.237045  121247 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.237100  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.237110  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.237135  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.237145  121247 reflector.go:161] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0315 18:41:51.237173  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.237513  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.237593  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.237630  121247 store.go:1319] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0315 18:41:51.237705  121247 reflector.go:161] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0315 18:41:51.237777  121247 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.237845  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.237867  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.237895  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.237941  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.238179  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.238237  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.238398  121247 store.go:1319] Monitoring secrets count at <storage-prefix>//secrets
I0315 18:41:51.238453  121247 reflector.go:161] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0315 18:41:51.238781  121247 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.239299  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.239322  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.239354  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.239399  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.239758  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.239791  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.239937  121247 store.go:1319] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0315 18:41:51.239959  121247 reflector.go:161] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0315 18:41:51.240091  121247 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.240176  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.240211  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.240239  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.240288  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.240531  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.240721  121247 store.go:1319] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0315 18:41:51.240762  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.240821  121247 reflector.go:161] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0315 18:41:51.240852  121247 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.240922  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.240938  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.240968  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.241058  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.241384  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.241485  121247 store.go:1319] Monitoring configmaps count at <storage-prefix>//configmaps
I0315 18:41:51.241564  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.241627  121247 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.241694  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.241712  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.241745  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.241790  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.241788  121247 reflector.go:161] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0315 18:41:51.242502  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.242602  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.242701  121247 store.go:1319] Monitoring namespaces count at <storage-prefix>//namespaces
I0315 18:41:51.242750  121247 reflector.go:161] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0315 18:41:51.242894  121247 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.242969  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.242980  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.243012  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.243067  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.243353  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.243476  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.243624  121247 store.go:1319] Monitoring endpoints count at <storage-prefix>//endpoints
I0315 18:41:51.243719  121247 reflector.go:161] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0315 18:41:51.243809  121247 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.243872  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.243884  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.243912  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.243952  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.244675  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.244826  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.245014  121247 store.go:1319] Monitoring nodes count at <storage-prefix>//nodes
I0315 18:41:51.245095  121247 reflector.go:161] Listing and watching *core.Node from storage/cacher.go:/nodes
I0315 18:41:51.245168  121247 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.245257  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.245287  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.245332  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.245405  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.245675  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.245747  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.245819  121247 store.go:1319] Monitoring pods count at <storage-prefix>//pods
I0315 18:41:51.245870  121247 reflector.go:161] Listing and watching *core.Pod from storage/cacher.go:/pods
I0315 18:41:51.246014  121247 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.246223  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.246284  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.246355  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.246407  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.246945  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.247015  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.247192  121247 store.go:1319] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0315 18:41:51.247281  121247 reflector.go:161] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0315 18:41:51.247439  121247 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.247537  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.247587  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.247646  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.247694  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.248222  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.248305  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.248434  121247 store.go:1319] Monitoring services count at <storage-prefix>//services
I0315 18:41:51.248469  121247 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.248514  121247 reflector.go:161] Listing and watching *core.Service from storage/cacher.go:/services
I0315 18:41:51.248565  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.248577  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.248604  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.248660  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.248991  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.249060  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.249097  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.249110  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.249136  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.249244  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.249523  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.249560  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.249709  121247 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.249800  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.249816  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.249843  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.249875  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.250076  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.250102  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.250161  121247 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0315 18:41:51.250289  121247 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0315 18:41:51.260866  121247 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0315 18:41:51.260904  121247 master.go:425] Enabling API group "authentication.k8s.io".
I0315 18:41:51.260924  121247 master.go:425] Enabling API group "authorization.k8s.io".
I0315 18:41:51.261059  121247 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.261165  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.261184  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.261236  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.261307  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.261705  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.261795  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.261915  121247 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0315 18:41:51.262050  121247 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0315 18:41:51.262091  121247 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.262179  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.262221  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.262255  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.262315  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.262584  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.262663  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.262788  121247 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0315 18:41:51.262868  121247 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0315 18:41:51.262937  121247 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.263020  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.263048  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.263083  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.263129  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.263452  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.263550  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.263570  121247 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0315 18:41:51.263593  121247 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0315 18:41:51.264017  121247 master.go:425] Enabling API group "autoscaling".
I0315 18:41:51.264191  121247 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.264685  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.264708  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.264741  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.264776  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.265132  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.265233  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.265308  121247 store.go:1319] Monitoring jobs.batch count at <storage-prefix>//jobs
I0315 18:41:51.265360  121247 reflector.go:161] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0315 18:41:51.265466  121247 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.265537  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.265550  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.265581  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.265629  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.265976  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.266129  121247 store.go:1319] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0315 18:41:51.266153  121247 master.go:425] Enabling API group "batch".
I0315 18:41:51.266163  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.266226  121247 reflector.go:161] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0315 18:41:51.266331  121247 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.266400  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.266421  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.266451  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.266500  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.266759  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.266845  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.266900  121247 store.go:1319] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0315 18:41:51.266923  121247 master.go:425] Enabling API group "certificates.k8s.io".
I0315 18:41:51.267007  121247 reflector.go:161] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0315 18:41:51.267077  121247 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.267172  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.267236  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.267304  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.267389  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.268053  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.268136  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.268227  121247 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0315 18:41:51.268255  121247 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0315 18:41:51.268540  121247 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.268621  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.268642  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.268671  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.268718  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.269142  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.269230  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.269375  121247 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0315 18:41:51.269422  121247 master.go:425] Enabling API group "coordination.k8s.io".
I0315 18:41:51.269445  121247 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0315 18:41:51.269635  121247 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.269718  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.269853  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.269913  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.269995  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.270644  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.270943  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.271114  121247 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0315 18:41:51.271139  121247 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0315 18:41:51.271312  121247 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.271392  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.271404  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.271434  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.271482  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.272304  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.272461  121247 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0315 18:41:51.272502  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.272546  121247 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0315 18:41:51.273187  121247 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.273318  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.273340  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.273371  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.273424  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.273688  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.273822  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.274023  121247 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 18:41:51.274088  121247 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 18:41:51.274396  121247 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.274472  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.274484  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.274512  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.274622  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.274867  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.274988  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.275069  121247 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0315 18:41:51.275106  121247 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0315 18:41:51.275227  121247 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.275317  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.275338  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.275368  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.275410  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.275901  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.275944  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.276060  121247 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0315 18:41:51.276148  121247 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0315 18:41:51.276383  121247 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.276498  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.276549  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.276603  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.276681  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.277034  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.277124  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.277178  121247 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0315 18:41:51.277345  121247 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0315 18:41:51.277338  121247 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.277969  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.277989  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.278024  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.278065  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.278366  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.278438  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.279165  121247 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0315 18:41:51.279191  121247 master.go:425] Enabling API group "extensions".
I0315 18:41:51.279284  121247 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0315 18:41:51.279353  121247 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.279431  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.279456  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.279487  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.280422  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.280732  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.280811  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.280867  121247 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0315 18:41:51.280936  121247 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0315 18:41:51.281053  121247 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.281163  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.281215  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.281285  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.281338  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.281538  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.281666  121247 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0315 18:41:51.281757  121247 master.go:425] Enabling API group "networking.k8s.io".
I0315 18:41:51.281768  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.281810  121247 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0315 18:41:51.281802  121247 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.281882  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.281900  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.281926  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.281971  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.282179  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.282337  121247 store.go:1319] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0315 18:41:51.282363  121247 master.go:425] Enabling API group "node.k8s.io".
I0315 18:41:51.282494  121247 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.282576  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.282594  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.282627  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.282702  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.282731  121247 reflector.go:161] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0315 18:41:51.282882  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.283156  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.283508  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.283681  121247 store.go:1319] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0315 18:41:51.283763  121247 reflector.go:161] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0315 18:41:51.283916  121247 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.284174  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.284189  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.284233  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.284287  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.284534  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.284712  121247 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0315 18:41:51.284732  121247 master.go:425] Enabling API group "policy".
I0315 18:41:51.284764  121247 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.284836  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.284854  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.284883  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.284958  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.285022  121247 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0315 18:41:51.285192  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.287897  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.288021  121247 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0315 18:41:51.288114  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.288141  121247 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0315 18:41:51.288476  121247 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.288564  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.289061  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.289116  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.289158  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.289419  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.289494  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.289644  121247 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0315 18:41:51.289698  121247 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.289723  121247 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0315 18:41:51.289778  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.289795  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.289824  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.289874  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.290497  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.290593  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.290728  121247 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0315 18:41:51.290756  121247 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0315 18:41:51.290859  121247 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.290928  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.291092  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.291347  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.291538  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.291770  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.291852  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.291877  121247 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0315 18:41:51.291918  121247 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0315 18:41:51.291929  121247 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.291996  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.292012  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.292049  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.292094  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.292347  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.292435  121247 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0315 18:41:51.292468  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.292541  121247 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0315 18:41:51.292602  121247 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.292684  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.292709  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.292753  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.292837  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.293621  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.293718  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.293792  121247 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0315 18:41:51.293849  121247 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0315 18:41:51.293823  121247 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.293970  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.294069  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.294115  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.294155  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.294422  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.294494  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.294580  121247 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0315 18:41:51.294696  121247 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0315 18:41:51.294714  121247 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.294774  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.294799  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.294827  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.294871  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.295128  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.295232  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.295500  121247 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0315 18:41:51.295536  121247 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0315 18:41:51.295560  121247 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0315 18:41:51.297535  121247 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.297653  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.297696  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.297757  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.297798  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.298107  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.298189  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.298297  121247 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0315 18:41:51.298348  121247 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0315 18:41:51.298429  121247 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.298506  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.298526  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.298557  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.298603  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.299066  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.299097  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.299179  121247 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0315 18:41:51.299215  121247 master.go:425] Enabling API group "scheduling.k8s.io".
I0315 18:41:51.299340  121247 master.go:417] Skipping disabled API group "settings.k8s.io".
I0315 18:41:51.299389  121247 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0315 18:41:51.300276  121247 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.300366  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.300386  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.300418  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.300464  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.300804  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.300847  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.300959  121247 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0315 18:41:51.300989  121247 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0315 18:41:51.301028  121247 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.301118  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.301207  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.301285  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.301355  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.301605  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.301737  121247 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0315 18:41:51.301772  121247 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.301811  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.301841  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.301860  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.301893  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.301944  121247 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0315 18:41:51.302074  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.302310  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.302372  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.302402  121247 store.go:1319] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0315 18:41:51.302432  121247 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.302467  121247 reflector.go:161] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0315 18:41:51.302493  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.302503  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.302530  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.302594  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.302783  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.302877  121247 store.go:1319] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0315 18:41:51.302983  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.303009  121247 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.303080  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.303093  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.303121  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.303168  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.303262  121247 reflector.go:161] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0315 18:41:51.303443  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.303516  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.303524  121247 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0315 18:41:51.303541  121247 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0315 18:41:51.303556  121247 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.303623  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.303634  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.303681  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.303779  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.304368  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.304574  121247 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0315 18:41:51.304599  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.304600  121247 master.go:425] Enabling API group "storage.k8s.io".
I0315 18:41:51.304618  121247 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0315 18:41:51.304745  121247 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.304822  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.304838  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.304868  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.304924  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.305165  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.305332  121247 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 18:41:51.305397  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.305464  121247 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.305498  121247 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 18:41:51.305533  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.305558  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.305585  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.305634  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.305853  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.305924  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.305999  121247 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0315 18:41:51.306064  121247 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0315 18:41:51.306152  121247 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.306247  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.306276  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.306304  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.306359  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.306578  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.306701  121247 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0315 18:41:51.306815  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.306839  121247 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.306904  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.306922  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.306948  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.306994  121247 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0315 18:41:51.307108  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.307892  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.307969  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.308002  121247 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 18:41:51.308042  121247 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 18:41:51.308145  121247 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.308252  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.308277  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.308307  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.308355  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.308796  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.308890  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.308924  121247 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0315 18:41:51.308957  121247 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0315 18:41:51.309062  121247 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.309127  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.309139  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.309170  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.309240  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.309805  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.309897  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.309927  121247 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0315 18:41:51.310021  121247 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0315 18:41:51.310077  121247 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.310163  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.310182  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.310230  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.310292  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.310519  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.310648  121247 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0315 18:41:51.310788  121247 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.310852  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.310857  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.310923  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.310995  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.310883  121247 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0315 18:41:51.311070  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.311431  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.311575  121247 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0315 18:41:51.311596  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.311655  121247 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0315 18:41:51.311803  121247 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.311902  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.311918  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.311947  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.312015  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.312324  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.312451  121247 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0315 18:41:51.312505  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.312558  121247 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0315 18:41:51.312570  121247 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.312635  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.312646  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.312678  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.312809  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.313060  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.313177  121247 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0315 18:41:51.313377  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.313391  121247 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.313447  121247 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0315 18:41:51.313489  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.313502  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.313534  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.313580  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.313780  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.313798  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.313906  121247 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0315 18:41:51.314047  121247 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.314115  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.314129  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.314160  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.314233  121247 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0315 18:41:51.314409  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.314967  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.315091  121247 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0315 18:41:51.315190  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.315250  121247 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.315283  121247 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0315 18:41:51.315326  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.315337  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.315370  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.315412  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.315866  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.315968  121247 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0315 18:41:51.315983  121247 master.go:425] Enabling API group "apps".
I0315 18:41:51.316012  121247 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.316100  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.316113  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.316148  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.316243  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.316286  121247 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0315 18:41:51.316420  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.317010  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.317133  121247 store.go:1319] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0315 18:41:51.317166  121247 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.317259  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.317290  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.317299  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.317320  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.317354  121247 reflector.go:161] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0315 18:41:51.317391  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.317763  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.317851  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.317878  121247 reflector.go:161] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0315 18:41:51.317858  121247 store.go:1319] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0315 18:41:51.317934  121247 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0315 18:41:51.317976  121247 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f06024a8-005a-4edd-b63e-5399f29b2095", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0315 18:41:51.318178  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:51.318223  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:51.318262  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:51.318320  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.318635  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:51.318705  121247 store.go:1319] Monitoring events count at <storage-prefix>//events
I0315 18:41:51.318729  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:51.318738  121247 master.go:425] Enabling API group "events.k8s.io".
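
A minimal sketch of the etcd storage configuration that the storage_factory lines above print for every resource (single local etcd at 127.0.0.1:2379, per-test prefix, paging on, quorum reads off, 5m compaction). Only fields visible in the log are set; the import path and package name are assumptions for illustration, not taken from the log.

package example

import (
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

// exampleStorageConfig mirrors the storagebackend.Config dumped repeatedly above.
func exampleStorageConfig() storagebackend.Config {
	return storagebackend.Config{
		Prefix: "f06024a8-005a-4edd-b63e-5399f29b2095", // per-test etcd prefix seen in the log
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Quorum:                false,
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000 ns in the log
		CountMetricPollPeriod: time.Minute,     // 60000000000 ns in the log
	}
}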
W0315 18:41:51.323621  121247 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
W0315 18:41:51.330772  121247 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0315 18:41:51.334697  121247 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0315 18:41:51.335652  121247 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0315 18:41:51.337826  121247 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0315 18:41:51.348314  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.348340  121247 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0315 18:41:51.348347  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.348354  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.348358  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.348503  121247 wrap.go:47] GET /healthz: (295.473µs) 500
goroutine 40271 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df650a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df650a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0108904c0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005403ca8, 0xc00bdb0340, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005403ca8, 0xc012250a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005403ca8, 0xc012250900)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005403ca8, 0xc012250900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01251d6e0, 0xc00cd2c3c0, 0x6258940, 0xc005403ca8, 0xc012250900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32962]
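
A minimal sketch, using only the standard library, of the readiness-polling pattern this block reflects: /healthz keeps returning 500 until etcd and the post-start hooks finish, so callers retry until they see 200. The function name, URL, and timing are placeholders, not the test framework's actual helpers.

package example

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls GET /healthz until it returns 200 or the timeout expires.
func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks ([+]ping, [+]etcd, post-start hooks) passed
			}
		}
		time.Sleep(100 * time.Millisecond) // still failing, e.g. "etcd client connection not yet established"
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %v", baseURL, timeout)
}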
I0315 18:41:51.349585  121247 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.394744ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:51.352109  121247 wrap.go:47] GET /api/v1/services: (1.114911ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:51.355886  121247 wrap.go:47] GET /api/v1/services: (1.0231ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:51.358223  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.358248  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.358258  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.358275  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.358409  121247 wrap.go:47] GET /healthz: (269.872µs) 500
goroutine 40112 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df4f500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df4f500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0107f1360, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0083678e8, 0xc009035200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0083678e8, 0xc0100ddf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0083678e8, 0xc0100dde00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0083678e8, 0xc0100dde00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01303bec0, 0xc00cd2c3c0, 0x6258940, 0xc0083678e8, 0xc0100dde00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:51.359244  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.045542ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32962]
I0315 18:41:51.359947  121247 wrap.go:47] GET /api/v1/services: (1.203032ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:51.360057  121247 wrap.go:47] GET /api/v1/services: (870.812µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.361411  121247 wrap.go:47] POST /api/v1/namespaces: (1.65276ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32962]
I0315 18:41:51.362689  121247 wrap.go:47] GET /api/v1/namespaces/kube-public: (891.275µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.364225  121247 wrap.go:47] POST /api/v1/namespaces: (1.2297ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.365406  121247 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (856.436µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.366945  121247 wrap.go:47] POST /api/v1/namespaces: (1.179266ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
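
A minimal sketch of the get-or-create pattern visible in the wrapped requests above (GET /api/v1/namespaces/<name> returns 404, then a POST returns 201 for kube-system, kube-public, and kube-node-lease). It uses plain unauthenticated HTTP against a local test server; the helper name and JSON body are illustrative assumptions, not the bootstrap controller's actual code.

package example

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureNamespace creates the namespace only if a GET shows it does not exist yet.
func ensureNamespace(baseURL, name string) error {
	resp, err := http.Get(fmt.Sprintf("%s/api/v1/namespaces/%s", baseURL, name))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already present
	}

	// A 404 above means the namespace must be created.
	body := []byte(fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name))
	resp, err = http.Post(baseURL+"/api/v1/namespaces", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("creating namespace %s: unexpected status %d", name, resp.StatusCode)
	}
	return nil
}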
I0315 18:41:51.449377  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.449420  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.449431  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.449439  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.449589  121247 wrap.go:47] GET /healthz: (355.475µs) 500
goroutine 39949 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a9eb110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a9eb110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01106e580, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c4563b0, 0xc00ff40900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13200)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c4563b0, 0xc00ed13200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012517380, 0xc00cd2c3c0, 0x6258940, 0xc00c4563b0, 0xc00ed13200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:51.459132  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.459171  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.459182  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.459189  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.459384  121247 wrap.go:47] GET /healthz: (399.851µs) 500
goroutine 39951 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a9eb1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a9eb1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01106e620, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c4563b8, 0xc00ff41080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13600)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c4563b8, 0xc00ed13600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012517440, 0xc00cd2c3c0, 0x6258940, 0xc00c4563b8, 0xc00ed13600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.549608  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.549653  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.549664  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.549671  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.549836  121247 wrap.go:47] GET /healthz: (380.144µs) 500
goroutine 40282 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df23810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df23810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0106b16e0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0086507c0, 0xc0029a3680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0086507c0, 0xc00dbeee00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0086507c0, 0xc00dbeee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0114bbb00, 0xc00cd2c3c0, 0x6258940, 0xc0086507c0, 0xc00dbeee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:51.559044  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.559090  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.559101  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.559108  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.559296  121247 wrap.go:47] GET /healthz: (374.224µs) 500
goroutine 40284 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df239d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df239d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0106b1780, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0086507c8, 0xc0029a3b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0086507c8, 0xc00dbef200)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0086507c8, 0xc00dbef200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0114bbbc0, 0xc00cd2c3c0, 0x6258940, 0xc0086507c8, 0xc00dbef200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.649433  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.649479  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.649490  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.649497  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.649731  121247 wrap.go:47] GET /healthz: (434.39µs) 500
goroutine 39755 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab86460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab86460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011034860, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005571058, 0xc00111bc80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005571058, 0xc00aa19200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005571058, 0xc00aa19100)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005571058, 0xc00aa19100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbca80, 0xc00cd2c3c0, 0x6258940, 0xc005571058, 0xc00aa19100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:51.659108  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.659145  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.659155  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.659164  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.659357  121247 wrap.go:47] GET /healthz: (371.025µs) 500
goroutine 39757 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab86540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab86540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011034900, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005571060, 0xc00d544180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005571060, 0xc00aa19600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005571060, 0xc00aa19500)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005571060, 0xc00aa19500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbcb40, 0xc00cd2c3c0, 0x6258940, 0xc005571060, 0xc00aa19500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.749487  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.749533  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.749544  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.749551  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.749712  121247 wrap.go:47] GET /healthz: (363.78µs) 500
goroutine 39953 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a9eb420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a9eb420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01106ec80, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c4563f8, 0xc00ff41980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13e00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c4563f8, 0xc00ed13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012517800, 0xc00cd2c3c0, 0x6258940, 0xc00c4563f8, 0xc00ed13e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:51.759194  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.759256  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.759282  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.759292  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.759434  121247 wrap.go:47] GET /healthz: (405.674µs) 500
goroutine 39759 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab86620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab86620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011034a20, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005571088, 0xc00d544600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005571088, 0xc00aa19c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005571088, 0xc00aa19b00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005571088, 0xc00aa19b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbccc0, 0xc00cd2c3c0, 0x6258940, 0xc005571088, 0xc00aa19b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.849378  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.849420  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.849430  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.849438  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.849590  121247 wrap.go:47] GET /healthz: (366.731µs) 500
goroutine 39761 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab86700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab86700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011034ac0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005571090, 0xc00d544a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005571090, 0xc00ea24000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005571090, 0xc00aa19f00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005571090, 0xc00aa19f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbcd80, 0xc00cd2c3c0, 0x6258940, 0xc005571090, 0xc00aa19f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:51.859067  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.859100  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.859109  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.859146  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.859322  121247 wrap.go:47] GET /healthz: (371.332µs) 500
goroutine 40339 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab867e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab867e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011034bc0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0055710b8, 0xc00d544f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0055710b8, 0xc00ea24600)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0055710b8, 0xc00ea24600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbcf00, 0xc00cd2c3c0, 0x6258940, 0xc0055710b8, 0xc00ea24600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:51.949366  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.949404  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.949413  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.949419  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.949556  121247 wrap.go:47] GET /healthz: (350.862µs) 500
goroutine 40286 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df23c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df23c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0106b1da0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008650858, 0xc011542300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008650858, 0xc002702000)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008650858, 0xc002702000)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008650858, 0xc002702000)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008650858, 0xc002702000)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008650858, 0xc002702000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008650858, 0xc00dbeff00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008650858, 0xc00dbeff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1ea060, 0xc00cd2c3c0, 0x6258940, 0xc008650858, 0xc00dbeff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:51.959153  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:51.959190  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:51.959214  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:51.959222  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:51.959386  121247 wrap.go:47] GET /healthz: (360.249µs) 500
goroutine 40341 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab86930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab86930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011034e60, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0055710e0, 0xc00d545500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0055710e0, 0xc00ea24d00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0055710e0, 0xc00ea24d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbd0e0, 0xc00cd2c3c0, 0x6258940, 0xc0055710e0, 0xc00ea24d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.049366  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:52.049398  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.049414  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.049422  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.049558  121247 wrap.go:47] GET /healthz: (374.294µs) 500
goroutine 40329 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ae94460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ae94460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0110e8d20, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005403e70, 0xc00a970a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005403e70, 0xc00ade2700)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005403e70, 0xc00ade2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e2a4660, 0xc00cd2c3c0, 0x6258940, 0xc005403e70, 0xc00ade2700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:52.059150  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:52.059191  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.059238  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.059251  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.059414  121247 wrap.go:47] GET /healthz: (413.498µs) 500
goroutine 40288 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df23e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df23e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011106140, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008650870, 0xc011542900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008650870, 0xc002702500)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008650870, 0xc002702500)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008650870, 0xc002702500)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008650870, 0xc002702500)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008650870, 0xc002702500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008650870, 0xc002702400)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008650870, 0xc002702400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1ea300, 0xc00cd2c3c0, 0x6258940, 0xc008650870, 0xc002702400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.149425  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:52.149470  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.149480  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.149488  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.149644  121247 wrap.go:47] GET /healthz: (368.541µs) 500
goroutine 40354 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00df23f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00df23f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0111062c0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008650888, 0xc011542d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008650888, 0xc002702900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008650888, 0xc002702900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008650888, 0xc002702900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008650888, 0xc002702900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008650888, 0xc002702900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008650888, 0xc002702800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008650888, 0xc002702800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1ea3c0, 0xc00cd2c3c0, 0x6258940, 0xc008650888, 0xc002702800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:52.159087  121247 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0315 18:41:52.159127  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.159151  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.159159  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.159332  121247 wrap.go:47] GET /healthz: (372.924µs) 500
goroutine 40309 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a7db650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a7db650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01100a400, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec78408, 0xc002be6f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec78408, 0xc006220500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec78408, 0xc006220400)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec78408, 0xc006220400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aa176e0, 0xc00cd2c3c0, 0x6258940, 0xc00ec78408, 0xc006220400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.233069  121247 clientconn.go:551] parsed scheme: ""
I0315 18:41:52.233110  121247 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0315 18:41:52.233169  121247 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0315 18:41:52.233236  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:52.233677  121247 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0315 18:41:52.233759  121247 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0315 18:41:52.250691  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.250722  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.250730  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.250867  121247 wrap.go:47] GET /healthz: (1.630659ms) 500
goroutine 40311 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a7db730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a7db730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01100a540, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec78418, 0xc00220da20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec78418, 0xc006220900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec78418, 0xc006220800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec78418, 0xc006220800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aa177a0, 0xc00cd2c3c0, 0x6258940, 0xc00ec78418, 0xc006220800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:32966]
I0315 18:41:52.260303  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.260337  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.260346  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.260509  121247 wrap.go:47] GET /healthz: (1.499016ms) 500
goroutine 40313 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a7db960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a7db960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01100a720, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec78428, 0xc00220dce0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec78428, 0xc006220d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec78428, 0xc006220c00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec78428, 0xc006220c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aa17aa0, 0xc00cd2c3c0, 0x6258940, 0xc00ec78428, 0xc006220c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.349953  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.349986  121247 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0315 18:41:52.349995  121247 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0315 18:41:52.350031  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.640802ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:52.350041  121247 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.411262ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33120]
I0315 18:41:52.350153  121247 wrap.go:47] GET /healthz: (973.048µs) 500
goroutine 40318 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a7dbdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a7dbdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01100b200, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec78498, 0xc00ccbc000, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec78498, 0xc006221900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec78498, 0xc006221800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec78498, 0xc006221800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cd30120, 0xc00cd2c3c0, 0x6258940, 0xc00ec78498, 0xc006221800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33122]
I0315 18:41:52.350263  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.873829ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.351353  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (858.097µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33120]
I0315 18:41:52.351504  121247 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (869.484µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.351863  121247 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.384288ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:52.352063  121247 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
I0315 18:41:52.352742  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.059494ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33120]
I0315 18:41:52.353392  121247 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.023994ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:52.353768  121247 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.716076ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.354039  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (972.304µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33120]
I0315 18:41:52.355144  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (714.514µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.355340  121247 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.414164ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32964]
I0315 18:41:52.355590  121247 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
I0315 18:41:52.355607  121247 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
I0315 18:41:52.356441  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (812.906µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.357726  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (883.482µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.358815  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (756.032µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.359518  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.359680  121247 wrap.go:47] GET /healthz: (785.904µs) 500
goroutine 40351 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ab87880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ab87880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0111babe0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0055711f8, 0xc00a012140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0055711f8, 0xc00c97e500)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0055711f8, 0xc00c97e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbdda0, 0xc00cd2c3c0, 0x6258940, 0xc0055711f8, 0xc00c97e500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.359820  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (661.174µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.361038  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (770.906µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.362986  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514835ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.363178  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0315 18:41:52.364131  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (744.26µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.365809  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.297598ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.366073  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0315 18:41:52.367039  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (784.816µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.368602  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.202686ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.368832  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0315 18:41:52.369798  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (783.324µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.371335  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.197898ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.371544  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0315 18:41:52.372550  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (810.731µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.374296  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372342ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.374478  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0315 18:41:52.375376  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (726.624µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.376939  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.24059ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.377141  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0315 18:41:52.378036  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (727.16µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.379758  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.338881ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.379963  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0315 18:41:52.380914  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (754.805µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.382635  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.31608ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.382850  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0315 18:41:52.383875  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (837.088µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.386318  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.049811ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.386583  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0315 18:41:52.387633  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (879.104µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.389484  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.462217ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.389704  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0315 18:41:52.390646  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (773.927µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.392376  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.347662ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.392639  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0315 18:41:52.393598  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (776.553µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.395984  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.959566ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.396281  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0315 18:41:52.397305  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (857.765µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.399699  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905342ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.400010  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0315 18:41:52.401568  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.208132ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.404006  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.05733ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.404316  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0315 18:41:52.405286  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (769.48µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.407082  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437876ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.407340  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0315 18:41:52.408398  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (861.17µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.409963  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.142628ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.410239  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0315 18:41:52.411177  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (723.406µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.412782  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.248556ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.412961  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0315 18:41:52.413845  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (746.017µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.415486  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.223991ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.415647  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0315 18:41:52.416591  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (784.132µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.418307  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395451ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.418541  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0315 18:41:52.419601  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (823.808µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.421678  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.616644ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.422104  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0315 18:41:52.423371  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (914.622µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.425842  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.1388ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.426275  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0315 18:41:52.427397  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (908.944µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.429345  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.478171ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.429765  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0315 18:41:52.430676  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (733.201µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.432713  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.59303ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.433154  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0315 18:41:52.434546  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.099467ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.437391  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.293049ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.437592  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0315 18:41:52.438669  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (873.018µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.441618  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.066065ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.442221  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0315 18:41:52.443656  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.150152ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.446755  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.263177ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.447496  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0315 18:41:52.448882  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.130428ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.451187  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.856383ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.451776  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0315 18:41:52.458449  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (6.267217ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.461038  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.2363ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.461324  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0315 18:41:52.461494  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.461513  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.461662  121247 wrap.go:47] GET /healthz: (2.035466ms) 500
goroutine 40511 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013322070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013322070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00adbe940, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008924c00, 0xc00a012500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008924c00, 0xc012f80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008924c00, 0xc012f80700)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008924c00, 0xc012f80700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012f48600, 0xc00cd2c3c0, 0x6258940, 0xc008924c00, 0xc012f80700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:52.461679  121247 wrap.go:47] GET /healthz: (2.433183ms) 500
goroutine 40525 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0121013b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0121013b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00accbc40, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0086512f8, 0xc002dd5e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0086512f8, 0xc0123b1b00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0086512f8, 0xc0123b1b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012596a20, 0xc00cd2c3c0, 0x6258940, 0xc0086512f8, 0xc0123b1b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.462439  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (938.146µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32966]
I0315 18:41:52.464781  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.889079ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.464977  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0315 18:41:52.465841  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (739.903µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.467588  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.28691ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.467752  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0315 18:41:52.468867  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (795.189µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.470923  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722133ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.471143  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0315 18:41:52.472000  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (667.591µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.474128  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.760557ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.474712  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0315 18:41:52.476155  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.227441ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.478854  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.241357ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.479091  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0315 18:41:52.480143  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (818.12µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.482069  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389889ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.482347  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0315 18:41:52.483303  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (730.679µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.485723  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927781ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.485936  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0315 18:41:52.487658  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.506275ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.491407  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.388533ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.491618  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0315 18:41:52.492705  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (866.849µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.494310  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.191489ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.494772  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0315 18:41:52.495783  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (870.984µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.497662  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.545796ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.498063  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0315 18:41:52.498963  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (706.461µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.500758  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.385138ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.500995  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0315 18:41:52.502234  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.047903ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.504183  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381382ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.504460  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0315 18:41:52.505481  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (802.206µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.507161  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.333586ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.507486  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0315 18:41:52.508400  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (743.282µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.510319  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.594696ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.510522  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0315 18:41:52.511507  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (788.823µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.513303  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.355152ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.513526  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0315 18:41:52.514408  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (727.136µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.516256  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42373ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.516456  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0315 18:41:52.517572  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (955.98µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.519183  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.297544ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.519508  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0315 18:41:52.520633  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (920.575µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.522086  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.104683ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.522335  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0315 18:41:52.523274  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (755.937µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.524775  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.177107ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.524995  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0315 18:41:52.525992  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (824.819µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.527543  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.141611ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.527755  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0315 18:41:52.528771  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (857.366µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.530477  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339643ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.530701  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0315 18:41:52.531775  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (910.023µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.533604  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.432514ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.533862  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0315 18:41:52.549978  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.44221ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.549979  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.550332  121247 wrap.go:47] GET /healthz: (1.212192ms) 500
goroutine 40493 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013937960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013937960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ea78d60, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec79390, 0xc0134aa280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec79390, 0xc013d05500)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec79390, 0xc013d05500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013c97980, 0xc00cd2c3c0, 0x6258940, 0xc00ec79390, 0xc013d05500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:52.559861  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.560126  121247 wrap.go:47] GET /healthz: (1.212206ms) 500
goroutine 40622 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015900380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015900380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e9956a0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008651bb0, 0xc0134aa640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008651bb0, 0xc015b80200)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008651bb0, 0xc015b80200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011d45500, 0xc00cd2c3c0, 0x6258940, 0xc008651bb0, 0xc015b80200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.570573  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.081811ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.570807  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0315 18:41:52.589723  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.180091ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.610933  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.262521ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.611309  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0315 18:41:52.630129  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.455592ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.652249  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.592258ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.653493  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0315 18:41:52.653969  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.654229  121247 wrap.go:47] GET /healthz: (1.233019ms) 500
goroutine 40624 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015900690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015900690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e995c00, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008651c38, 0xc004a112c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008651c38, 0xc015b80800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008651c38, 0xc015b80700)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008651c38, 0xc015b80700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011d45860, 0xc00cd2c3c0, 0x6258940, 0xc008651c38, 0xc015b80700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33122]
I0315 18:41:52.659789  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.659963  121247 wrap.go:47] GET /healthz: (1.049217ms) 500
goroutine 40641 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0160e8770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0160e8770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00eb1ba00, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00f2c8b08, 0xc0134aab40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afc00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00f2c8b08, 0xc0153afc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015411c20, 0xc00cd2c3c0, 0x6258940, 0xc00f2c8b08, 0xc0153afc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.669886  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.336643ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.690825  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.183193ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.691093  121247 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0315 18:41:52.709907  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.30189ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.730928  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.224065ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.731176  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0315 18:41:52.749955  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.750033  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.46148ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33122]
I0315 18:41:52.750111  121247 wrap.go:47] GET /healthz: (961.705µs) 500
goroutine 40603 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016926700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016926700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00eacd760, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc005571ef0, 0xc002ea9900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc005571ef0, 0xc016934900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc005571ef0, 0xc016934800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc005571ef0, 0xc016934800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0126d8e40, 0xc00cd2c3c0, 0x6258940, 0xc005571ef0, 0xc016934800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:52.759953  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.760143  121247 wrap.go:47] GET /healthz: (1.150309ms) 500
goroutine 40683 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0162ab340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0162ab340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ed46800, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec79758, 0xc0134ab040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec79758, 0xc01692c200)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec79758, 0xc01692c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0168d4900, 0xc00cd2c3c0, 0x6258940, 0xc00ec79758, 0xc01692c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.770621  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.056103ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.770870  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
E0315 18:41:52.781547  121247 event.go:200] Unable to write event: 'Post http://127.0.0.1:42015/api/v1/namespaces/test-scaled-rollout-deployment/events: dial tcp 127.0.0.1:42015: connect: connection refused' (may retry after sleeping)
I0315 18:41:52.790428  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.778863ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.810946  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.345835ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.811224  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0315 18:41:52.830228  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.571416ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.850739  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.850987  121247 wrap.go:47] GET /healthz: (1.733364ms) 500
goroutine 40607 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016926ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016926ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ee52be0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00e47e000, 0xc00c9a0000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00e47e000, 0xc016935e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00e47e000, 0xc016935d00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00e47e000, 0xc016935d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0126d9aa0, 0xc00cd2c3c0, 0x6258940, 0xc00e47e000, 0xc016935d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33122]
I0315 18:41:52.851104  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.496136ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.851539  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0315 18:41:52.860375  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.860559  121247 wrap.go:47] GET /healthz: (1.505156ms) 500
goroutine 40696 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0160e9730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0160e9730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f176f40, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00f2c8d20, 0xc004a11680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00f2c8d20, 0xc016449800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00f2c8d20, 0xc016449800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01645f140, 0xc00cd2c3c0, 0x6258940, 0xc00f2c8d20, 0xc016449800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.869667  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.164127ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.890645  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.062831ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.890886  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0315 18:41:52.909894  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.383191ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.931909  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.588938ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.932224  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0315 18:41:52.967464  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.967605  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (7.10133ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:52.967789  121247 wrap.go:47] GET /healthz: (7.170987ms) 500
goroutine 40723 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5ee380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5ee380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ec0ba60, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008793600, 0xc008d192c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008793600, 0xc00d5a3600)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008793600, 0xc00d5a3600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0161b7c20, 0xc00cd2c3c0, 0x6258940, 0xc008793600, 0xc00d5a3600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:52.967931  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:52.968061  121247 wrap.go:47] GET /healthz: (7.587943ms) 500
goroutine 40655 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158d4850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158d4850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ee02f20, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008367fa0, 0xc008d19680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008367fa0, 0xc015897600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008367fa0, 0xc015897500)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008367fa0, 0xc015897500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c110120, 0xc00cd2c3c0, 0x6258940, 0xc008367fa0, 0xc015897500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33122]
I0315 18:41:52.970402  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.723435ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:52.970658  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0315 18:41:52.989583  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.05207ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.010315  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.779683ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.010571  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0315 18:41:53.029946  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.408346ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.050517  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.050684  121247 wrap.go:47] GET /healthz: (1.454112ms) 500
goroutine 40657 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158d4ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158d4ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e844680, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008354048, 0xc00c9a0500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008354048, 0xc015897e00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008354048, 0xc015897e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008354048, 0xc015897e00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008354048, 0xc015897e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008354048, 0xc015897e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008354048, 0xc015897d00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008354048, 0xc015897d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c1106c0, 0xc00cd2c3c0, 0x6258940, 0xc008354048, 0xc015897d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:53.051279  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.550748ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.051508  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0315 18:41:53.059645  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.059803  121247 wrap.go:47] GET /healthz: (858.8µs) 500
goroutine 40742 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003560690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003560690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f2d58a0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec79b60, 0xc00b974000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec79b60, 0xc00b970300)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec79b60, 0xc00b970300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0168d5980, 0xc00cd2c3c0, 0x6258940, 0xc00ec79b60, 0xc00b970300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.069550  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.051694ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.090497  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.83716ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.090785  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0315 18:41:53.109749  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.192328ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.137984  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.697569ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.138292  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0315 18:41:53.149972  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.395361ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.150372  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.150531  121247 wrap.go:47] GET /healthz: (1.358372ms) 500
goroutine 40700 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0160e9ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0160e9ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f3813a0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00f2c8dd8, 0xc002ce17c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355ea00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00f2c8dd8, 0xc00355ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01645fc20, 0xc00cd2c3c0, 0x6258940, 0xc00f2c8dd8, 0xc00355ea00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:53.161494  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.161668  121247 wrap.go:47] GET /healthz: (1.958093ms) 500
goroutine 40717 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b68620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b68620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f0ccb20, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2110, 0xc0134ab680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2110, 0xc00ba07a00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2110, 0xc00ba07a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0167fdce0, 0xc00cd2c3c0, 0x6258940, 0xc003cf2110, 0xc00ba07a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.171915  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.394983ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.172320  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0315 18:41:53.190288  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.348498ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.211590  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.983323ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.211853  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0315 18:41:53.229764  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.193425ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.250763  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.250955  121247 wrap.go:47] GET /healthz: (1.195493ms) 500
goroutine 40809 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b69a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b69a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013e33e80, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2368, 0xc004a11e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2368, 0xc007872000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2368, 0xc00bcb3f00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2368, 0xc00bcb3f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c43d140, 0xc00cd2c3c0, 0x6258940, 0xc003cf2368, 0xc00bcb3f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:53.251347  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.667705ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.251548  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0315 18:41:53.259898  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.260074  121247 wrap.go:47] GET /healthz: (1.085457ms) 500
goroutine 40811 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc000b69c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc000b69c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013ea42a0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2390, 0xc00a012b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2390, 0xc007872500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2390, 0xc007872400)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2390, 0xc007872400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c43d440, 0xc00cd2c3c0, 0x6258940, 0xc003cf2390, 0xc007872400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.269781  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.240604ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.290566  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.014003ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.290841  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0315 18:41:53.309848  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.148358ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.330489  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.902657ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.330745  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0315 18:41:53.350368  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.350472  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.05588ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.350528  121247 wrap.go:47] GET /healthz: (1.128613ms) 500
goroutine 40765 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008a4e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008a4e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013efcac0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008925100, 0xc008d19b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008925100, 0xc00a167b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008925100, 0xc00a167a00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008925100, 0xc00a167a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0087d2660, 0xc00cd2c3c0, 0x6258940, 0xc008925100, 0xc00a167a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:53.364514  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.364689  121247 wrap.go:47] GET /healthz: (5.736051ms) 500
goroutine 40815 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00878c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00878c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013ea5000, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2470, 0xc00642e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2470, 0xc007873400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2470, 0xc007873300)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2470, 0xc007873300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c43dc80, 0xc00cd2c3c0, 0x6258940, 0xc003cf2470, 0xc007873300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.370355  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.832777ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.370611  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0315 18:41:53.389693  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.171796ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.410507  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.856149ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.410816  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0315 18:41:53.429836  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.204438ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.452804  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.452985  121247 wrap.go:47] GET /healthz: (3.864836ms) 500
goroutine 40818 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003561e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003561e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0140bf020, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00ec79ed0, 0xc00642e500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00ec79ed0, 0xc006432e00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00ec79ed0, 0xc006432e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0077d9c20, 0xc00cd2c3c0, 0x6258940, 0xc00ec79ed0, 0xc006432e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:53.453243  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.641208ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.453497  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0315 18:41:53.459840  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.460010  121247 wrap.go:47] GET /healthz: (1.072791ms) 500
goroutine 40730 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5ef180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5ef180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013fff880, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008793748, 0xc00642e8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008793748, 0xc00261ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008793748, 0xc00261ec00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008793748, 0xc00261ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00880b1a0, 0xc00cd2c3c0, 0x6258940, 0xc008793748, 0xc00261ec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.469546  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.074509ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.490547  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.912989ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.490823  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0315 18:41:53.509763  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.192262ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.553982  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.554175  121247 wrap.go:47] GET /healthz: (4.990428ms) 500
goroutine 40794 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00dac8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00dac8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0140ec5c0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c456e00, 0xc013cba3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c456e00, 0xc0050c7100)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c456e00, 0xc0050c7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e50a840, 0xc00cd2c3c0, 0x6258940, 0xc00c456e00, 0xc0050c7100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:53.554315  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (25.635973ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.554555  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0315 18:41:53.555960  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.195381ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.559851  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.560010  121247 wrap.go:47] GET /healthz: (1.081927ms) 500
goroutine 40838 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00baa0f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00baa0f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014170400, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00f2c9020, 0xc002ce1cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03000)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00f2c9020, 0xc00ed03000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009db0f60, 0xc00cd2c3c0, 0x6258940, 0xc00f2c9020, 0xc00ed03000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.575348  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.815902ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.575644  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0315 18:41:53.589739  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.219796ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.610387  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.790559ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.610660  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0315 18:41:53.629703  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.16138ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.650733  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136933ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.650783  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.651023  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0315 18:41:53.651045  121247 wrap.go:47] GET /healthz: (1.857627ms) 500
goroutine 40735 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5ef7a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5ef7a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0141ccb40, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0087937f8, 0xc00cd3e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0087937f8, 0xc00261f800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0087937f8, 0xc00261f800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00880bf80, 0xc00cd2c3c0, 0x6258940, 0xc0087937f8, 0xc00261f800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:53.660076  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.660259  121247 wrap.go:47] GET /healthz: (1.275703ms) 500
goroutine 40737 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5efab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5efab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0141ccea0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008793840, 0xc00cd3e500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008793840, 0xc00261fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008793840, 0xc00261fd00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008793840, 0xc00261fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b718300, 0xc00cd2c3c0, 0x6258940, 0xc008793840, 0xc00261fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.675019  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (6.50732ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.692163  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.618488ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.692448  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0315 18:41:53.712615  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (4.091523ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.730538  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.94532ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.730782  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0315 18:41:53.750077  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.456535ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:53.751211  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.751436  121247 wrap.go:47] GET /healthz: (1.896914ms) 500
goroutine 40840 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00baa16c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00baa16c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014171500, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00f2c90f8, 0xc008c3cdc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03a00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00f2c90f8, 0xc00ed03a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009db1860, 0xc00cd2c3c0, 0x6258940, 0xc00f2c90f8, 0xc00ed03a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:53.759890  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.760054  121247 wrap.go:47] GET /healthz: (1.189821ms) 500
goroutine 40886 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c5eff10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c5eff10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0141cdca0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008793950, 0xc00cd3e8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008793950, 0xc00cc13500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008793950, 0xc00cc13400)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008793950, 0xc00cc13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b718fc0, 0xc00cd2c3c0, 0x6258940, 0xc008793950, 0xc00cc13400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.770817  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363276ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.771056  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0315 18:41:53.798347  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (9.819241ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.811111  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.605353ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.811484  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0315 18:41:53.829488  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.039816ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.855824  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.144832ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.855997  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.856153  121247 wrap.go:47] GET /healthz: (3.795381ms) 500
goroutine 40904 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b9e87e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b9e87e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0143b7860, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00e47e750, 0xc00b974a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00e47e750, 0xc00d079b00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00e47e750, 0xc00d079b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00785e9c0, 0xc00cd2c3c0, 0x6258940, 0xc00e47e750, 0xc00d079b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:53.856524  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0315 18:41:53.860299  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.860462  121247 wrap.go:47] GET /healthz: (1.54913ms) 500
goroutine 40914 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b8f4380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b8f4380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014399300, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00f2c92a0, 0xc00b974f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9b00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00f2c92a0, 0xc0103e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0103eaa80, 0xc00cd2c3c0, 0x6258940, 0xc00f2c92a0, 0xc0103e9b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.869878  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.365794ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.900927  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.330722ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.901220  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0315 18:41:53.916346  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (6.40049ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.930566  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.010566ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.930813  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0315 18:41:53.949683  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.119583ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.964117  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.964321  121247 wrap.go:47] GET /healthz: (15.243181ms) 500
goroutine 40870 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00dac9960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00dac9960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01426d0c0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c457028, 0xc00a013040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c457028, 0xc00afdd200)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c457028, 0xc00afdd200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d7f4780, 0xc00cd2c3c0, 0x6258940, 0xc00c457028, 0xc00afdd200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:53.964487  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:53.964631  121247 wrap.go:47] GET /healthz: (5.711227ms) 500
goroutine 40858 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d092ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d092ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0143bd460, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0089257b0, 0xc00a013400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0089257b0, 0xc00e415800)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0089257b0, 0xc00e415800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d094720, 0xc00cd2c3c0, 0x6258940, 0xc0089257b0, 0xc00e415800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.970389  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.87631ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:53.970599  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0315 18:41:53.991443  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.917528ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.010420  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.930353ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.010716  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0315 18:41:54.040539  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (11.9717ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.049948  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.050129  121247 wrap.go:47] GET /healthz: (1.017845ms) 500
goroutine 40892 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103e71f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103e71f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0144da300, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008793af8, 0xc00a0137c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008793af8, 0xc00ab8a900)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008793af8, 0xc00ab8a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b719f20, 0xc00cd2c3c0, 0x6258940, 0xc008793af8, 0xc00ab8a900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:54.050945  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.408235ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.051184  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0315 18:41:54.060695  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.060878  121247 wrap.go:47] GET /healthz: (1.898411ms) 500
goroutine 40862 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d0935e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d0935e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0144849e0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc0089258c8, 0xc00b975400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc0089258c8, 0xc00266ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc0089258c8, 0xc00266eb00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc0089258c8, 0xc00266eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d0954a0, 0xc00cd2c3c0, 0x6258940, 0xc0089258c8, 0xc00266eb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.070111  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.493522ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.091492  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.879091ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.092020  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0315 18:41:54.109965  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.281292ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.131110  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.48897ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.131422  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0315 18:41:54.155925  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.156068  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (7.491124ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.156169  121247 wrap.go:47] GET /healthz: (7.018481ms) 500
goroutine 40864 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d0939d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d0939d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014484fe0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008925960, 0xc0134abb80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008925960, 0xc00266f600)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008925960, 0xc00266f600)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008925960, 0xc00266f600)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008925960, 0xc00266f600)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008925960, 0xc00266f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008925960, 0xc00266f500)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008925960, 0xc00266f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d095920, 0xc00cd2c3c0, 0x6258940, 0xc008925960, 0xc00266f500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:54.159692  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.159849  121247 wrap.go:47] GET /healthz: (990.819µs) 500
goroutine 40878 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00266ccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00266ccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0144d7b40, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c4572c8, 0xc00a013cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3000)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c4572c8, 0xc0024e3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d7f5d40, 0xc00cd2c3c0, 0x6258940, 0xc00c4572c8, 0xc0024e3000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.170721  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.108892ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.170971  121247 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0315 18:41:54.216119  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (27.59268ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.218150  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.340793ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.220685  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.961212ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.220882  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0315 18:41:54.229494  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.001783ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.231061  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.167608ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.250877  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.251289  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.733287ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.251568  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0315 18:41:54.251119  121247 wrap.go:47] GET /healthz: (1.383354ms) 500
goroutine 40966 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00878d0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00878d0a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0140cbb60, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2650, 0xc006958000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2650, 0xc00261ab00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2650, 0xc00261ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0041f1140, 0xc00cd2c3c0, 0x6258940, 0xc003cf2650, 0xc00261ab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:54.259711  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.259892  121247 wrap.go:47] GET /healthz: (952.282µs) 500
goroutine 40948 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bc8070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bc8070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014485c80, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008925a38, 0xc00c9a0b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008925a38, 0xc005bca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008925a38, 0xc005bca700)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008925a38, 0xc005bca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005bcc1e0, 0xc00cd2c3c0, 0x6258940, 0xc008925a38, 0xc005bca700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.280130  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (11.638939ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.282121  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.41196ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.290221  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.732446ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.290450  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0315 18:41:54.309705  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.240778ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.311166  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.101037ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.330040  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.626154ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.330319  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0315 18:41:54.349482  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (938.132µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.349805  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.349959  121247 wrap.go:47] GET /healthz: (895.656µs) 500
goroutine 40971 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00878db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00878db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0147672c0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2770, 0xc0069583c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2770, 0xc00261bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2770, 0xc00261ba00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2770, 0xc00261ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0041f1b00, 0xc00cd2c3c0, 0x6258940, 0xc003cf2770, 0xc00261ba00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33126]
I0315 18:41:54.364453  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.364585  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (14.717767ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33164]
I0315 18:41:54.364616  121247 wrap.go:47] GET /healthz: (5.731913ms) 500
goroutine 40880 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00266d3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00266d3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01453e900, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c4573e0, 0xc00b975900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3c00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c4573e0, 0xc0024e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049fa240, 0xc00cd2c3c0, 0x6258940, 0xc00c4573e0, 0xc0024e3c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.370370  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.923521ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.370607  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0315 18:41:54.389678  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.145978ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.391427  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.303063ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.410099  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.576416ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.410564  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0315 18:41:54.429751  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.206842ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.446534  121247 wrap.go:47] GET /api/v1/namespaces/kube-public: (16.318346ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.450075  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.450259  121247 wrap.go:47] GET /healthz: (1.180046ms) 500
goroutine 40952 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bc9b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bc9b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014875d80, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008925cf0, 0xc00c9a0f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008925cf0, 0xc0104f0a00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008925cf0, 0xc0104f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005bccea0, 0xc00cd2c3c0, 0x6258940, 0xc008925cf0, 0xc0104f0a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:54.450813  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.3699ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.451023  121247 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0315 18:41:54.459822  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.459995  121247 wrap.go:47] GET /healthz: (1.066203ms) 500
goroutine 40954 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005bc9ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005bc9ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014875fa0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008925d00, 0xc00b975cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008925d00, 0xc0104f1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008925d00, 0xc0104f0f00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008925d00, 0xc0104f0f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005bcd140, 0xc00cd2c3c0, 0x6258940, 0xc008925d00, 0xc0104f0f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.469715  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.199943ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.471419  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.228083ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.490761  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.206607ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.490997  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0315 18:41:54.509858  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.20942ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.511794  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.421173ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.533984  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.413374ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.534343  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0315 18:41:54.549800  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.253301ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.549836  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.549991  121247 wrap.go:47] GET /healthz: (900.863µs) 500
goroutine 40956 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011f9e070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011f9e070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01495e8c0, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc008925db0, 0xc006958780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc008925db0, 0xc0104f1d00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc008925db0, 0xc0104f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005bcd620, 0xc00cd2c3c0, 0x6258940, 0xc008925db0, 0xc0104f1d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:54.551855  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.724986ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.559679  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.559868  121247 wrap.go:47] GET /healthz: (958.182µs) 500
goroutine 41000 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011e6c4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011e6c4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149d9000, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc00c457850, 0xc006958c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc00c457850, 0xc011ff4400)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc00c457850, 0xc011ff4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049fb6e0, 0xc00cd2c3c0, 0x6258940, 0xc00c457850, 0xc011ff4400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.594413  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (25.938116ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.594722  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0315 18:41:54.598674  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (3.751754ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.600711  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.691077ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.610263  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.707562ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.610476  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0315 18:41:54.629698  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.144635ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.631489  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.299803ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.650513  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.650686  121247 wrap.go:47] GET /healthz: (1.520046ms) 500
goroutine 41030 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0104396c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0104396c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014a42f80, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2a68, 0xc006959180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e100)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2a68, 0xc012e7e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011f7ac60, 0xc00cd2c3c0, 0x6258940, 0xc003cf2a68, 0xc012e7e100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33164]
I0315 18:41:54.651768  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.185628ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.652001  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0315 18:41:54.659763  121247 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0315 18:41:54.659945  121247 wrap.go:47] GET /healthz: (996.457µs) 500
goroutine 41032 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010439b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010439b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014a43480, 0x1f4)
net/http.Error(0x7f9104f3c718, 0xc003cf2b08, 0xc012658280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
net/http.HandlerFunc.ServeHTTP(0xc010fdeb80, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc013b6fd40, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007199260, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x415fe2e, 0xe, 0xc000a1b950, 0xc007199260, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
net/http.HandlerFunc.ServeHTTP(0xc00c372200, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
net/http.HandlerFunc.ServeHTTP(0xc008ecbb00, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
net/http.HandlerFunc.ServeHTTP(0xc00c372280, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f9104f3c718, 0xc003cf2b08, 0xc012e7ea00)
net/http.HandlerFunc.ServeHTTP(0xc00a7a70e0, 0x7f9104f3c718, 0xc003cf2b08, 0xc012e7ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011f7b140, 0xc00cd2c3c0, 0x6258940, 0xc003cf2b08, 0xc012e7ea00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.669466  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (991.265µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.671621  121247 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.725241ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.690597  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.000266ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.690834  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0315 18:41:54.720796  121247 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (12.20746ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.724598  121247 wrap.go:47] GET /api/v1/namespaces/kube-public: (3.327926ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.730167  121247 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.660255ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.730431  121247 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0315 18:41:54.750297  121247 wrap.go:47] GET /healthz: (1.06154ms) 200 [Go-http-client/1.1 127.0.0.1:33126]
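Note on the repeated GET /healthz 500s above: every one of them is the same condition. The rbac/bootstrap-roles post-start hook has not yet finished seeding the default roles and rolebindings, so the verbose healthz output flags that single check as "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" while every other check reports "[+] ok". Once the last bootstrap rolebinding lands (18:41:54.730 above), the endpoint returns 200 as in the line just above. A caller that needs the apiserver ready can simply poll /healthz until it answers 200; the sketch below is a generic poll against a placeholder URL, not the integration framework's own wait logic.

// Minimal readiness poll, assuming an insecure local endpoint such as the
// 127.0.0.1 test server in this log; healthzURL is a placeholder.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(healthzURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(healthzURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks, including post-start hooks, report ok
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
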
E0315 18:41:54.751962  121247 prometheus.go:138] failed to register depth metric deployment: duplicate metrics collector registration attempted
E0315 18:41:54.751992  121247 prometheus.go:150] failed to register adds metric deployment: duplicate metrics collector registration attempted
E0315 18:41:54.752033  121247 prometheus.go:162] failed to register latency metric deployment: duplicate metrics collector registration attempted
E0315 18:41:54.752078  121247 prometheus.go:174] failed to register work_duration metric deployment: duplicate metrics collector registration attempted
E0315 18:41:54.752104  121247 prometheus.go:189] failed to register unfinished_work_seconds metric deployment: duplicate metrics collector registration attempted
E0315 18:41:54.752119  121247 prometheus.go:202] failed to register longest_running_processor_microseconds metric deployment: duplicate metrics collector registration attempted
E0315 18:41:54.752171  121247 prometheus.go:214] failed to register retries metric deployment: duplicate metrics collector registration attempted
W0315 18:41:54.752222  121247 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 18:41:54.752257  121247 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0315 18:41:54.752291  121247 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
E0315 18:41:54.752911  121247 prometheus.go:138] failed to register depth metric replicaset: duplicate metrics collector registration attempted
E0315 18:41:54.752941  121247 prometheus.go:150] failed to register adds metric replicaset: duplicate metrics collector registration attempted
E0315 18:41:54.752981  121247 prometheus.go:162] failed to register latency metric replicaset: duplicate metrics collector registration attempted
E0315 18:41:54.753025  121247 prometheus.go:174] failed to register work_duration metric replicaset: duplicate metrics collector registration attempted
E0315 18:41:54.753051  121247 prometheus.go:189] failed to register unfinished_work_seconds metric replicaset: duplicate metrics collector registration attempted
E0315 18:41:54.753073  121247 prometheus.go:202] failed to register longest_running_processor_microseconds metric replicaset: duplicate metrics collector registration attempted
E0315 18:41:54.753117  121247 prometheus.go:214] failed to register retries metric replicaset: duplicate metrics collector registration attempted
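The E-level lines above ("failed to register ... duplicate metrics collector registration attempted") are the Error() text of client_golang's AlreadyRegisteredError: this test binary constructs the deployment and replicaset controllers again, and their workqueues try to register metrics (depth, adds, latency, work_duration, and so on) under names that an earlier registration in the same process-wide default registry already claimed. The log shows the run continuing past them. A minimal reproduction of the condition with plain client_golang, using an illustrative metric name rather than the workqueue's real ones:

package main

import (
	"errors"
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Two collectors describing the same metric, standing in for two controller
	// instances registering the same workqueue metric name in one process.
	first := prometheus.NewGauge(prometheus.GaugeOpts{Name: "example_queue_depth", Help: "illustrative queue depth"})
	second := prometheus.NewGauge(prometheus.GaugeOpts{Name: "example_queue_depth", Help: "illustrative queue depth"})

	if err := prometheus.Register(first); err != nil {
		fmt.Println("unexpected:", err)
	}
	err := prometheus.Register(second)
	var already prometheus.AlreadyRegisteredError
	if errors.As(err, &already) {
		// Error() here is the "duplicate metrics collector registration attempted"
		// text seen in the log above.
		fmt.Println("second registration rejected:", err)
	}
}
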
I0315 18:41:54.755885  121247 wrap.go:47] POST /apis/apps/v1/namespaces/test-deployment-available-condition/deployments: (2.414543ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.756365  121247 reflector.go:123] Starting reflector *v1.Deployment (12h0m0s) from k8s.io/client-go/informers/factory.go:133
I0315 18:41:54.756389  121247 reflector.go:161] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:133
I0315 18:41:54.756478  121247 reflector.go:123] Starting reflector *v1.ReplicaSet (12h0m0s) from k8s.io/client-go/informers/factory.go:133
I0315 18:41:54.756495  121247 reflector.go:161] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0315 18:41:54.756526  121247 replica_set.go:182] Starting replicaset controller
I0315 18:41:54.756539  121247 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
I0315 18:41:54.756560  121247 deployment_controller.go:152] Starting deployment controller
I0315 18:41:54.756566  121247 controller_utils.go:1027] Waiting for caches to sync for deployment controller
I0315 18:41:54.756480  121247 reflector.go:123] Starting reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:133
I0315 18:41:54.756594  121247 reflector.go:161] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0315 18:41:54.757143  121247 wrap.go:47] GET /apis/apps/v1/deployments?limit=500&resourceVersion=0: (530.601µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:33164]
I0315 18:41:54.757327  121247 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (436.042µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:33502]
I0315 18:41:54.757475  121247 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (531.147µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:33504]
I0315 18:41:54.757617  121247 deployment_controller.go:168] Adding deployment deployment
I0315 18:41:54.757848  121247 get.go:251] Starting watch for /apis/apps/v1/deployments, rv=19245 labels= fields= timeout=9m27s
I0315 18:41:54.757857  121247 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=18889 labels= fields= timeout=9m43s
I0315 18:41:54.758144  121247 get.go:251] Starting watch for /api/v1/pods, rv=18886 labels= fields= timeout=7m7s
I0315 18:41:54.758898  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.553452ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.759717  121247 wrap.go:47] GET /healthz: (738.238µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.760786  121247 wrap.go:47] GET /api/v1/namespaces/default: (787.607µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.762713  121247 wrap.go:47] POST /api/v1/namespaces: (1.522968ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.763948  121247 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (861.959µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.767406  121247 wrap.go:47] POST /api/v1/namespaces/default/services: (3.056258ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.783792  121247 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (15.996374ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.786328  121247 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.912456ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0315 18:41:54.856705  121247 shared_informer.go:123] caches populated
I0315 18:41:54.856774  121247 controller_utils.go:1034] Caches are synced for ReplicaSet controller
I0315 18:41:54.856710  121247 shared_informer.go:123] caches populated
I0315 18:41:54.856821  121247 controller_utils.go:1034] Caches are synced for deployment controller
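The reflector and cache-sync lines above (18:41:54.756 through .856) are the standard client-go informer startup: the shared informer factory starts one reflector per type (Deployment, ReplicaSet, Pod), each issues an initial LIST (the ?limit=500&resourceVersion=0 requests) and then a WATCH ("Starting watch for ..."), and both controllers block on "Waiting for caches to sync" until every store is populated. The same sequence in application code, sketched with current client-go against a placeholder kubeconfig (not the test's own setup code):

// Sketch only: start a Deployment informer and wait for its cache to sync,
// mirroring the reflector / "Caches are synced" sequence in the log.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 12*time.Hour) // 12h resync, as in the log
	deployInformer := factory.Apps().V1().Deployments().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // kicks off the LIST+WATCH reflectors

	if !cache.WaitForCacheSync(stop, deployInformer.HasSynced) {
		panic("deployment cache never synced")
	}
	fmt.Println("caches are synced; controller work loops may start")
}
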
I0315 18:41:54.856880  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:54.856874342 +0000 UTC m=+188.847651091)
I0315 18:41:54.857255  121247 deployment_util.go:259] Updating replica set "deployment-74fb8955d7" revision to 1
I0315 18:41:54.860610  121247 wrap.go:47] POST /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets: (2.910643ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33506]
I0315 18:41:54.860962  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.443161ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33126]
I0315 18:41:54.860983  121247 controller_utils.go:201] Controller test-deployment-available-condition/deployment-74fb8955d7 either never recorded expectations, or the ttl expired.
I0315 18:41:54.861020  121247 controller_utils.go:218] Setting expectations &controller.ControlleeExpectations{add:10, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.861086  121247 replica_set.go:477] Too few replicas for ReplicaSet test-deployment-available-condition/deployment-74fb8955d7, need 10, creating 10
I0315 18:41:54.861080  121247 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"test-deployment-available-condition", Name:"deployment", UID:"01610eef-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19245", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-74fb8955d7 to 10
I0315 18:41:54.861132  121247 deployment_controller.go:214] ReplicaSet deployment-74fb8955d7 added.
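The "Setting expectations &controller.ControlleeExpectations{add:10, ...}" line and the "Lowered expectations" lines that follow are the ReplicaSet controller's bookkeeping: before issuing the ten creates it records how many pod Add events it expects to observe, decrements the count as each created pod flows back through the informer, and treats the expectations as unsatisfied (deferring further scaling decisions) until the counter drains or the TTL expires; the "either never recorded expectations, or the ttl expired" line above is the fresh/expired case. The fragment below only illustrates that idea with invented names; it is not the controller_utils implementation.

package main

import (
	"fmt"
	"sync/atomic"
)

// expectations is an invented stand-in for the idea behind
// controller.ControlleeExpectations in the log: a count of creations the
// controller is still waiting to observe.
type expectations struct{ pendingAdds int64 }

func (e *expectations) ExpectCreations(n int64) { atomic.StoreInt64(&e.pendingAdds, n) }
func (e *expectations) CreationObserved()       { atomic.AddInt64(&e.pendingAdds, -1) }
func (e *expectations) Satisfied() bool         { return atomic.LoadInt64(&e.pendingAdds) <= 0 }

func main() {
	var e expectations
	e.ExpectCreations(10) // "Setting expectations ... add:10"
	for i := 0; i < 3; i++ {
		e.CreationObserved() // "Lowered expectations ... add:9", "add:8", "add:7"
	}
	fmt.Println("satisfied:", e.Satisfied()) // false: seven creations still unobserved
}
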
I0315 18:41:54.864052  121247 replica_set.go:275] Pod deployment-74fb8955d7-2td6s created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-2td6s", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-2td6s", UID:"0171914d-4752-11e9-8860-0242ac110002", ResourceVersion:"19255", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00bf476c7), BlockOwnerDeletion:(*bool)(0xc00bf476c8)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00bf47750), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010e09680), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00bf47758)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.864262  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:9, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.864465  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.152999ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33506]
I0315 18:41:54.864728  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:54.860871474 +0000 UTC m=+188.851648227 - now: 2019-03-15 18:41:54.864722068 +0000 UTC m=+188.855498808]
I0315 18:41:54.864949  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:54.865225  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.501277ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:54.865480  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-2td6s
I0315 18:41:54.865676  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (4.364289ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33126]
I0315 18:41:54.865902  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-2td6s
I0315 18:41:54.866757  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (1.774696ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33506]
I0315 18:41:54.866924  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (10.044316ms)
I0315 18:41:54.866951  121247 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
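The 409 on PUT .../deployments/deployment/status two lines up, and the "the object has been modified; please apply your changes to the latest version and try again" message here, are ordinary optimistic-concurrency conflicts: the update carried a stale resourceVersion, the apiserver rejected it, and the controller requeues and re-syncs (the next "Started syncing deployment" line). Client code that hits the same Conflict typically re-reads the object and retries, for example with client-go's RetryOnConflict helper. The sketch below uses current client-go signatures (newer than the 2019 tree this log was produced from) and is not the deployment controller's own code:

// Hedged sketch: retry a Deployment status update across 409 Conflicts.
package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func updateStatusWithRetry(ctx context.Context, cs kubernetes.Interface, ns, name string, mutate func(*appsv1.Deployment)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the resourceVersion is current.
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(d)
		_, err = cs.AppsV1().Deployments(ns).UpdateStatus(ctx, d, metav1.UpdateOptions{})
		return err // a Conflict here makes RetryOnConflict loop again
	})
}
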
I0315 18:41:54.866989  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:54.866986096 +0000 UTC m=+188.857762853)
I0315 18:41:54.867418  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:54 +0000 UTC - now: 2019-03-15 18:41:54.86741195 +0000 UTC m=+188.858188700]
I0315 18:41:54.867601  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (1.811546ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:54.867698  121247 replica_set.go:275] Pod deployment-74fb8955d7-nflf7 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-nflf7", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-nflf7", UID:"01722fb8-4752-11e9-8860-0242ac110002", ResourceVersion:"19259", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00be4dfe7), BlockOwnerDeletion:(*bool)(0xc00be4dfe8)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c16a0d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010da35c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c16a0d8)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.867809  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:8, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.867926  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-nflf7
I0315 18:41:54.867978  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-nflf7
I0315 18:41:54.868156  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.044109ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33510]
I0315 18:41:54.869574  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-rt24x
I0315 18:41:54.869718  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (3.717815ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33126]
I0315 18:41:54.869621  121247 replica_set.go:275] Pod deployment-74fb8955d7-rt24x created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-rt24x", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-rt24x", UID:"01723a2c-4752-11e9-8860-0242ac110002", ResourceVersion:"19260", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"pod-template-hash":"74fb8955d7", "name":"test"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00c16a807), BlockOwnerDeletion:(*bool)(0xc00c16a808)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c16a8e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010da3740), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c16a8e8)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.869807  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:7, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.869809  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-rt24x
I0315 18:41:54.871921  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.044282ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:54.872342  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.179036ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33512]
I0315 18:41:54.872463  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.351534ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33126]
I0315 18:41:54.872583  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-b7522
I0315 18:41:54.872618  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-b7522
I0315 18:41:54.872782  121247 replica_set.go:275] Pod deployment-74fb8955d7-5w9vh created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-5w9vh", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-5w9vh", UID:"0172d8f9-4752-11e9-8860-0242ac110002", ResourceVersion:"19262", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00c10dad7), BlockOwnerDeletion:(*bool)(0xc00c10dad8)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c10dd00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010ee7620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c10dd08)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.872906  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:6, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.872927  121247 replica_set.go:275] Pod deployment-74fb8955d7-b7522 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-b7522", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-b7522", UID:"0172d0b4-4752-11e9-8860-0242ac110002", ResourceVersion:"19265", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00c10df77), BlockOwnerDeletion:(*bool)(0xc00c10df78)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c186000), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010ee7680), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c186008)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.873008  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:5, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.873441  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-5w9vh
I0315 18:41:54.873506  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-5w9vh
I0315 18:41:54.873797  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.348935ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33516]
I0315 18:41:54.874008  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-vv5jt
I0315 18:41:54.874049  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-vv5jt
I0315 18:41:54.874119  121247 replica_set.go:275] Pod deployment-74fb8955d7-vv5jt created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-vv5jt", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vv5jt", UID:"017309a9-4752-11e9-8860-0242ac110002", ResourceVersion:"19266", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00c2ca077), BlockOwnerDeletion:(*bool)(0xc00c2ca078)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c2ca100), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010e39da0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c2ca108)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.874232  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:4, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.874248  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (1.34747ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33126]
I0315 18:41:54.874879  121247 replica_set.go:275] Pod deployment-74fb8955d7-hgf95 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-hgf95", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-hgf95", UID:"01730daa-4752-11e9-8860-0242ac110002", ResourceVersion:"19268", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272114, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00b8e5a67), BlockOwnerDeletion:(*bool)(0xc00b8e5a68)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00b8e5af0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010fbe360), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00b8e5af8)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:54.874999  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:3, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:54.875030  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.553233ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33514]
I0315 18:41:54.875316  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-hgf95
I0315 18:41:54.875465  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-hgf95
I0315 18:41:54.876818  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (5.486794ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33506]
I0315 18:41:54.876938  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:54.877105  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (10.112357ms)
I0315 18:41:54.877160  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:54.87715631 +0000 UTC m=+188.867933059)
I0315 18:41:54.877558  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:54 +0000 UTC - now: 2019-03-15 18:41:54.877550478 +0000 UTC m=+188.868327235]
I0315 18:41:54.877607  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:54.877629  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (468.662µs)
I0315 18:41:54.961456  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.799117ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.061408  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.791205ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.061816  121247 request.go:530] Throttling request took 187.155741ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:55.064318  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.341186ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33516]
I0315 18:41:55.161727  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.088156ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.261426  121247 request.go:530] Throttling request took 385.93378ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/pods
I0315 18:41:55.264252  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.567427ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:55.264295  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (4.6506ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.264698  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-4zdt2
I0315 18:41:55.264744  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-4zdt2
I0315 18:41:55.265008  121247 replica_set.go:275] Pod deployment-74fb8955d7-4zdt2 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-4zdt2", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-4zdt2", UID:"01aea0dc-4752-11e9-8860-0242ac110002", ResourceVersion:"19289", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272115, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"pod-template-hash":"74fb8955d7", "name":"test"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00c1862d7), BlockOwnerDeletion:(*bool)(0xc00c1862d8)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c186360), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc010ee7800), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c186368)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:55.265183  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:2, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.361594  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.959674ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.461391  121247 request.go:530] Throttling request took 585.886216ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/pods
I0315 18:41:55.461641  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.001858ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.463740  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.107378ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33516]
I0315 18:41:55.463969  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-kst7t
I0315 18:41:55.464023  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-kst7t
I0315 18:41:55.463942  121247 replica_set.go:275] Pod deployment-74fb8955d7-kst7t created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-kst7t", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-kst7t", UID:"01cd2352-4752-11e9-8860-0242ac110002", ResourceVersion:"19301", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272115, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00c663b27), BlockOwnerDeletion:(*bool)(0xc00c663b28)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00c663bb0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc01119a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00c663bb8)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:55.464155  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.574614  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (14.989402ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.661383  121247 request.go:530] Throttling request took 785.729284ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/pods
I0315 18:41:55.661675  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.014138ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.663883  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.242829ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:55.664116  121247 controller_utils.go:587] Controller deployment-74fb8955d7 created pod deployment-74fb8955d7-vnbvp
I0315 18:41:55.664034  121247 replica_set.go:275] Pod deployment-74fb8955d7-vnbvp created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-74fb8955d7-vnbvp", GenerateName:"deployment-74fb8955d7-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vnbvp", UID:"01ebaa66-4752-11e9-8860-0242ac110002", ResourceVersion:"19312", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688272115, loc:(*time.Location)(0x8e10020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"74fb8955d7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", Controller:(*bool)(0xc00cad8517), BlockOwnerDeletion:(*bool)(0xc00cad8518)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00cad85a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0110f47e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00cad85a8)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort"}}.
I0315 18:41:55.664226  121247 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.664234  121247 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-74fb8955d7", UID:"0170fb85-4752-11e9-8860-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"19254", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-74fb8955d7-vnbvp
I0315 18:41:55.664190  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 0->0 (need 10), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0315 18:41:55.666742  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (2.16712ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:55.667034  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (806.058076ms)
I0315 18:41:55.667085  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:55.667086  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.667134  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.667111599 +0000 UTC m=+189.657888358)
I0315 18:41:55.667221  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 0->10 (need 10), fullyLabeledReplicas 0->10, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:55.667611  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:54 +0000 UTC - now: 2019-03-15 18:41:55.667605205 +0000 UTC m=+189.658381970]
I0315 18:41:55.667662  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7198s
I0315 18:41:55.667684  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (569.418µs)
I0315 18:41:55.669596  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (2.103461ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:55.669801  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:55.669844  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.669824637 +0000 UTC m=+189.660601399)
I0315 18:41:55.669855  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (2.772254ms)
I0315 18:41:55.669893  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.669974  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (86.124µs)
I0315 18:41:55.672714  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.203307ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33508]
I0315 18:41:55.672944  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (3.11283ms)
I0315 18:41:55.673057  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:55.673084  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.673080413 +0000 UTC m=+189.663857167)
I0315 18:41:55.673516  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:55 +0000 UTC - now: 2019-03-15 18:41:55.673510082 +0000 UTC m=+189.664286839]
I0315 18:41:55.673560  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:55.673585  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (502.971µs)
I0315 18:41:55.761741  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.080998ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.764490  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.070518ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.766500  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.572355ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.774140  121247 wrap.go:47] GET /api/v1/namespaces/test-deployment-available-condition/pods?labelSelector=name%3Dtest: (2.719004ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.791961  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (17.130369ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.861420  121247 request.go:530] Throttling request took 796.762161ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:55.867511  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (5.774348ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33508]
I0315 18:41:55.953496  121247 request.go:530] Throttling request took 161.003842ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest
I0315 18:41:55.978191  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest: (24.378473ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.981614  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-2td6s/status: (2.509944ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.981855  121247 replica_set.go:338] Pod deployment-74fb8955d7-2td6s updated, objectMeta {Name:deployment-74fb8955d7-2td6s GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-2td6s UID:0171914d-4752-11e9-8860-0242ac110002 ResourceVersion:19255 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00bf476c7 BlockOwnerDeletion:0xc00bf476c8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-2td6s GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-2td6s UID:0171914d-4752-11e9-8860-0242ac110002 ResourceVersion:19325 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[pod-template-hash:74fb8955d7 name:test] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc01041aa07 BlockOwnerDeletion:0xc01041aa08}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.981954  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.982035  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.982184  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 0->1, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:55.984362  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-4zdt2/status: (2.247472ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.985434  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (2.95338ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33516]
I0315 18:41:55.985681  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (3.667436ms)
I0315 18:41:55.984739  121247 replica_set.go:338] Pod deployment-74fb8955d7-4zdt2 updated, objectMeta {Name:deployment-74fb8955d7-4zdt2 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-4zdt2 UID:01aea0dc-4752-11e9-8860-0242ac110002 ResourceVersion:19289 Generation:0 CreationTimestamp:2019-03-15 18:41:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00c1862d7 BlockOwnerDeletion:0xc00c1862d8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-4zdt2 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-4zdt2 UID:01aea0dc-4752-11e9-8860-0242ac110002 ResourceVersion:19326 Generation:0 CreationTimestamp:2019-03-15 18:41:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00cadf9d7 BlockOwnerDeletion:0xc00cadf9d8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.985811  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:55.985850  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.985785  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.985883  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.985865032 +0000 UTC m=+189.976641879)
I0315 18:41:55.985971  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 1->2, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:55.986407  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-5w9vh/status: (1.61684ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33508]
I0315 18:41:55.986867  121247 replica_set.go:338] Pod deployment-74fb8955d7-5w9vh updated, objectMeta {Name:deployment-74fb8955d7-5w9vh GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-5w9vh UID:0172d8f9-4752-11e9-8860-0242ac110002 ResourceVersion:19262 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00c10dad7 BlockOwnerDeletion:0xc00c10dad8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-5w9vh GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-5w9vh UID:0172d8f9-4752-11e9-8860-0242ac110002 ResourceVersion:19328 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc01052a347 BlockOwnerDeletion:0xc01052a348}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.986933  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.988943  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.136949ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33508]
I0315 18:41:55.988957  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-b7522/status: (2.123329ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0315 18:41:55.989168  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:55.989230  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (3.359402ms)
I0315 18:41:55.989259  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.989256844 +0000 UTC m=+189.980033591)
I0315 18:41:55.989604  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:55 +0000 UTC - now: 2019-03-15 18:41:55.989598042 +0000 UTC m=+189.980374797]
I0315 18:41:55.989634  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:55.989646  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (387.273µs)
I0315 18:41:55.989945  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (3.739329ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33516]
I0315 18:41:55.990077  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:55.990113  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.99010957 +0000 UTC m=+189.980886329)
I0315 18:41:55.990141  121247 replica_set.go:338] Pod deployment-74fb8955d7-b7522 updated, objectMeta {Name:deployment-74fb8955d7-b7522 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-b7522 UID:0172d0b4-4752-11e9-8860-0242ac110002 ResourceVersion:19265 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00c10df77 BlockOwnerDeletion:0xc00c10df78}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-b7522 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-b7522 UID:0172d0b4-4752-11e9-8860-0242ac110002 ResourceVersion:19331 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00bdf9c77 BlockOwnerDeletion:0xc00bdf9c78}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.990259  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.990951  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (5.184791ms)
I0315 18:41:55.991017  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.991123  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 2->4, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:55.991221  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-hgf95/status: (1.827278ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0315 18:41:55.991347  121247 replica_set.go:338] Pod deployment-74fb8955d7-hgf95 updated, objectMeta {Name:deployment-74fb8955d7-hgf95 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-hgf95 UID:01730daa-4752-11e9-8860-0242ac110002 ResourceVersion:19268 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00b8e5a67 BlockOwnerDeletion:0xc00b8e5a68}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-hgf95 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-hgf95 UID:01730daa-4752-11e9-8860-0242ac110002 ResourceVersion:19332 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[pod-template-hash:74fb8955d7 name:test] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc0108542a7 BlockOwnerDeletion:0xc0108542a8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.991443  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.994405  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (2.990415ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33558]
I0315 18:41:55.994420  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.700446ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33560]
I0315 18:41:55.994536  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:55.994506  121247 replica_set.go:338] Pod deployment-74fb8955d7-kst7t updated, objectMeta {Name:deployment-74fb8955d7-kst7t GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-kst7t UID:01cd2352-4752-11e9-8860-0242ac110002 ResourceVersion:19301 Generation:0 CreationTimestamp:2019-03-15 18:41:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00c663b27 BlockOwnerDeletion:0xc00c663b28}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-kst7t GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-kst7t UID:01cd2352-4752-11e9-8860-0242ac110002 ResourceVersion:19334 Generation:0 CreationTimestamp:2019-03-15 18:41:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc01067c7f7 BlockOwnerDeletion:0xc01067c7f8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.994573  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.994677  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (4.563674ms)
I0315 18:41:55.994686  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:55.994685  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (3.672273ms)
I0315 18:41:55.994704  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.994701678 +0000 UTC m=+189.985478429)
I0315 18:41:55.994723  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-kst7t/status: (3.039243ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33516]
I0315 18:41:55.994723  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.994814  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 4->6, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:55.996954  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (1.888636ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33558]
I0315 18:41:55.997158  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-nflf7/status: (1.981896ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
I0315 18:41:55.997131  121247 replica_set.go:338] Pod deployment-74fb8955d7-nflf7 updated, objectMeta {Name:deployment-74fb8955d7-nflf7 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-nflf7 UID:01722fb8-4752-11e9-8860-0242ac110002 ResourceVersion:19259 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00be4dfe7 BlockOwnerDeletion:0xc00be4dfe8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-nflf7 GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-nflf7 UID:01722fb8-4752-11e9-8860-0242ac110002 ResourceVersion:19337 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc01093ce07 BlockOwnerDeletion:0xc01093ce08}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.997193  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (2.472058ms)
I0315 18:41:55.997242  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:55.997344  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 4->7, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:55.997895  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.194941ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33562]
I0315 18:41:55.997963  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:55.998147  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (3.440847ms)
I0315 18:41:55.998177  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:55.998174607 +0000 UTC m=+189.988951364)
I0315 18:41:55.998257  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:55.998553  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:55 +0000 UTC - now: 2019-03-15 18:41:55.998545258 +0000 UTC m=+189.989322015]
I0315 18:41:55.998598  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:55.998618  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (441.149µs)
I0315 18:41:55.999227  121247 store.go:355] GuaranteedUpdate of /f06024a8-005a-4edd-b63e-5399f29b2095/replicasets/test-deployment-available-condition/deployment-74fb8955d7 failed because of a conflict, going to retry
I0315 18:41:55.999474  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (1.925191ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33560]
I0315 18:41:55.999517  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-rt24x/status: (1.873922ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0315 18:41:55.999747  121247 replica_set.go:338] Pod deployment-74fb8955d7-rt24x updated, objectMeta {Name:deployment-74fb8955d7-rt24x GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-rt24x UID:01723a2c-4752-11e9-8860-0242ac110002 ResourceVersion:19260 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00c16a807 BlockOwnerDeletion:0xc00c16a808}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-rt24x GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-rt24x UID:01723a2c-4752-11e9-8860-0242ac110002 ResourceVersion:19340 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[pod-template-hash:74fb8955d7 name:test] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc01093d517 BlockOwnerDeletion:0xc01093d518}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:55.999810  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:56.000036  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:56.000055  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.000052337 +0000 UTC m=+189.990829094)
I0315 18:41:56.001951  121247 replica_set.go:338] Pod deployment-74fb8955d7-vnbvp updated, objectMeta {Name:deployment-74fb8955d7-vnbvp GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vnbvp UID:01ebaa66-4752-11e9-8860-0242ac110002 ResourceVersion:19312 Generation:0 CreationTimestamp:2019-03-15 18:41:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00cad8517 BlockOwnerDeletion:0xc00cad8518}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-vnbvp GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vnbvp UID:01ebaa66-4752-11e9-8860-0242ac110002 ResourceVersion:19341 Generation:0 CreationTimestamp:2019-03-15 18:41:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc0109c7d77 BlockOwnerDeletion:0xc0109c7d78}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:56.002063  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:56.002888  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:56.023389  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7: (23.625426ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33560]
I0315 18:41:56.023392  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vnbvp/status: (23.495289ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0315 18:41:56.023635  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (22.715499ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33564]
I0315 18:41:56.023736  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 6->7, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:56.023962  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (23.903639ms)
I0315 18:41:56.024014  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.024011388 +0000 UTC m=+190.014788140)
I0315 18:41:56.024582  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:56 +0000 UTC - now: 2019-03-15 18:41:56.024573262 +0000 UTC m=+190.015350017]
I0315 18:41:56.024659  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:56.024684  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (669.702µs)
I0315 18:41:56.026419  121247 wrap.go:47] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vv5jt/status: (2.436812ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
I0315 18:41:56.026772  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (2.814215ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33564]
I0315 18:41:56.026783  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:56.026897  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.026892376 +0000 UTC m=+190.017669130)
I0315 18:41:56.026904  121247 replica_set.go:338] Pod deployment-74fb8955d7-vv5jt updated, objectMeta {Name:deployment-74fb8955d7-vv5jt GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vv5jt UID:017309a9-4752-11e9-8860-0242ac110002 ResourceVersion:19266 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc00c2ca077 BlockOwnerDeletion:0xc00c2ca078}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-74fb8955d7-vv5jt GenerateName:deployment-74fb8955d7- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-74fb8955d7-vv5jt UID:017309a9-4752-11e9-8860-0242ac110002 ResourceVersion:19344 Generation:0 CreationTimestamp:2019-03-15 18:41:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:74fb8955d7] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-74fb8955d7 UID:0170fb85-4752-11e9-8860-0242ac110002 Controller:0xc010de47b7 BlockOwnerDeletion:0xc010de47b8}] Initializers:nil Finalizers:[] ClusterName: ManagedFields:[]}.
I0315 18:41:56.026981  121247 replica_set.go:348] ReplicaSet "deployment-74fb8955d7" will be enqueued after 3600s for availability check
I0315 18:41:56.027129  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (29.888062ms)
I0315 18:41:56.027178  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:56.027320  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 7->10, availableReplicas 0->0, sequence No: 1->1
I0315 18:41:56.030257  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (2.698209ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33560]
I0315 18:41:56.030539  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (3.364573ms)
I0315 18:41:56.030676  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:56.030683  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:56.030822  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (160.196µs)
I0315 18:41:56.031748  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (4.153035ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33562]
I0315 18:41:56.032480  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (5.582881ms)
I0315 18:41:56.032516  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.032513384 +0000 UTC m=+190.023290139)
I0315 18:41:56.032481  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:56.035165  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.076793ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33562]
I0315 18:41:56.035375  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:56.035467  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (2.948357ms)
I0315 18:41:56.035509  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.035506256 +0000 UTC m=+190.026283011)
I0315 18:41:56.035850  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:56 +0000 UTC - now: 2019-03-15 18:41:56.035844508 +0000 UTC m=+190.026621262]
I0315 18:41:56.035887  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:56.035905  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (396.417µs)
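A note on the two long requeue delays above: "will be enqueued after 3600s for availability check" and "Queueing up deployment ... for a progress check after 7199s" show the controllers deferring work instead of polling; the test's Deployment apparently uses a very large minReadySeconds (3600s) and progress deadline, so the object's key is simply re-added to the workqueue with a delay. Below is a minimal sketch of that delayed-requeue idiom with client-go's delaying workqueue; the key and durations are copied from the log purely for illustration, and this is not the controllers' actual code.

package example

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// requeueLater shows the delayed-requeue idiom: instead of blocking or polling,
// a controller re-adds the object's key and lets a worker pick it up after the delay.
func requeueLater(queue workqueue.DelayingInterface, key string) {
	// Re-check availability once minReadySeconds could have elapsed (3600s in this test).
	queue.AddAfter(key, 3600*time.Second)
	// Schedule a separate progress check near the progress deadline (7199s in the log).
	queue.AddAfter(key, 7199*time.Second)
}

A worker goroutine would block on queue.Get() and re-run the sync handler once a delayed key becomes available.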
I0315 18:41:56.061458  121247 request.go:530] Throttling request took 193.497377ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:56.064414  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.600879ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33562]
I0315 18:41:56.153495  121247 request.go:530] Throttling request took 126.736894ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:56.168721  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (14.920929ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0315 18:41:56.261402  121247 request.go:530] Throttling request took 196.568273ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:56.263991  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.262904ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33562]
I0315 18:41:56.353493  121247 request.go:530] Throttling request took 184.143082ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:56.355654  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.874802ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0315 18:41:56.461412  121247 request.go:530] Throttling request took 196.966461ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:56.464416  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.616825ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33562]
I0315 18:41:56.553483  121247 request.go:530] Throttling request took 197.388689ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:56.568833  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (15.035322ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0315 18:41:56.662305  121247 request.go:530] Throttling request took 197.267211ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:56.664503  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (1.868935ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33562]
I0315 18:41:56.753507  121247 request.go:530] Throttling request took 184.167192ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:56.755691  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.880989ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0315 18:41:56.861427  121247 request.go:530] Throttling request took 196.522325ms, request: POST:http://127.0.0.1:45991/api/v1/namespaces/test-deployment-available-condition/events
I0315 18:41:56.864250  121247 wrap.go:47] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.488971ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33562]
I0315 18:41:56.953498  121247 request.go:530] Throttling request took 197.264472ms, request: PUT:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:56.958148  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (4.295362ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0315 18:41:56.959127  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:56.959172  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.95916766 +0000 UTC m=+190.949944441)
I0315 18:41:56.962921  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7: (3.092207ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33562]
I0315 18:41:56.962961  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:56.963066  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:56.963084  121247 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-74fb8955d7, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 10->10, availableReplicas 0->9, sequence No: 1->2
I0315 18:41:56.965569  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7: (1.905653ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33560]
I0315 18:41:56.965900  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (6.726164ms)
I0315 18:41:56.965917  121247 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on replicasets.apps "deployment-74fb8955d7": the object has been modified; please apply your changes to the latest version and try again
I0315 18:41:56.965946  121247 deployment_controller.go:280] ReplicaSet deployment-74fb8955d7 updated.
I0315 18:41:56.965946  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.965944155 +0000 UTC m=+190.956720896)
I0315 18:41:56.966649  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-74fb8955d7/status: (3.275862ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:33562]
I0315 18:41:56.966852  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (3.90232ms)
I0315 18:41:56.966882  121247 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-74fb8955d7", timestamp:time.Time{wall:0xbf1b1a5cb3521221, ext:188851793375, loc:(*time.Location)(0x8e10020)}}
I0315 18:41:56.966971  121247 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-74fb8955d7" (95.703µs)
I0315 18:41:56.969709  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.112302ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33560]
I0315 18:41:56.969961  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (4.012098ms)
I0315 18:41:56.969988  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.969984992 +0000 UTC m=+190.960761741)
I0315 18:41:56.970701  121247 deployment_controller.go:175] Updating deployment deployment
I0315 18:41:56.972163  121247 wrap.go:47] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (1.413217ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:33560]
I0315 18:41:56.972365  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (2.37582ms)
I0315 18:41:56.972378  121247 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
I0315 18:41:56.972399  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.972397513 +0000 UTC m=+190.963174255)
I0315 18:41:56.972692  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:56 +0000 UTC - now: 2019-03-15 18:41:56.9726874 +0000 UTC m=+190.963464141]
I0315 18:41:56.972716  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:56.972725  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (325.858µs)
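The two 409 responses above (the PUT on replicasets/deployment-74fb8955d7 and on deployments/deployment/status) and the matching "Operation cannot be fulfilled ...: the object has been modified" errors are ordinary optimistic-concurrency conflicts: the deployment and replicaset controllers raced on the same objects, the writer holding the stale resourceVersion lost, and the deployment was simply requeued and synced again a few milliseconds later. Client code that hits the same error normally re-reads the object and reapplies its change; here is a minimal, hypothetical sketch with client-go's retry helper, using the context-free method signatures of this Kubernetes 1.15-era client (the helper name and the annotation being written are illustrative only).

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchReplicaSet retries its update whenever the apiserver answers 409 Conflict,
// re-reading the object on each attempt so the resourceVersion is current.
func touchReplicaSet(client kubernetes.Interface, namespace, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		rs, err := client.AppsV1().ReplicaSets(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if rs.Annotations == nil {
			rs.Annotations = map[string]string{}
		}
		rs.Annotations["example.io/touched"] = "true" // the mutation itself is only an example
		_, err = client.AppsV1().ReplicaSets(namespace).Update(rs)
		return err
	})
}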
I0315 18:41:56.981319  121247 deployment_controller.go:562] Started syncing deployment "test-deployment-available-condition/deployment" (2019-03-15 18:41:56.981307754 +0000 UTC m=+190.972084514)
I0315 18:41:56.981812  121247 deployment_util.go:795] Deployment "deployment" timed out (false) [last progress check: 2019-03-15 18:41:56 +0000 UTC - now: 2019-03-15 18:41:56.98180444 +0000 UTC m=+190.972581188]
I0315 18:41:56.981859  121247 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0315 18:41:56.981874  121247 deployment_controller.go:564] Finished syncing deployment "test-deployment-available-condition/deployment" (563.436µs)
I0315 18:41:57.153477  121247 request.go:530] Throttling request took 194.850822ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:57.156018  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.183112ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
I0315 18:41:57.353462  121247 request.go:530] Throttling request took 196.933801ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:57.355857  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.072329ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
I0315 18:41:57.553507  121247 request.go:530] Throttling request took 197.163971ms, request: GET:http://127.0.0.1:45991/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0315 18:41:57.555943  121247 wrap.go:47] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.149884ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
I0315 18:41:57.556354  121247 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0315 18:41:57.556696  121247 deployment_controller.go:164] Shutting down deployment controller
I0315 18:41:57.556737  121247 replica_set.go:194] Shutting down replicaset controller
I0315 18:41:57.557066  121247 wrap.go:47] GET /api/v1/pods?resourceVersion=18886&timeout=7m7s&timeoutSeconds=427&watch=true: (2.799247009s) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:33504]
I0315 18:41:57.557138  121247 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=18889&timeout=9m43s&timeoutSeconds=583&watch=true: (2.799463886s) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:33164]
I0315 18:41:57.557243  121247 wrap.go:47] GET /apis/apps/v1/deployments?resourceVersion=19245&timeout=9m27s&timeoutSeconds=567&watch=true: (2.799588858s) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:33502]
I0315 18:41:57.573290  121247 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.62574ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
I0315 18:41:57.575927  121247 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.199862ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33560]
util.go:205: Updating deployment deployment
deployment_test.go:1174: unexpected .replicas: expect 10, got 9
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190315-183649.xml
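The failing assertion is the deployment_test.go:1174 line above: when the test read the Deployment back it found 9 replicas reported where 10 were expected, i.e. the controller's status write had not yet landed at the moment of the check, possibly because of the conflict/requeue churn visible just before shutdown. A check of this kind is usually written as a poll that waits for the status to converge rather than a single read; the following is a minimal, hypothetical sketch (not the test's actual helper) against the era's context-free client-go.

package example

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForStatusReplicas polls the Deployment until its status reports the wanted
// replica count, instead of failing on a single possibly-stale read.
func waitForStatusReplicas(client kubernetes.Interface, namespace, name string, want int32) error {
	return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		d, err := client.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Converged once the controller's status write reflects all expected replicas.
		return d.Status.Replicas == want, nil
	})
}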

Error lines from build-log.txt

... skipping 303 lines ...
W0315 18:31:09.549] I0315 18:31:09.548415   55641 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0315 18:31:09.549] I0315 18:31:09.548511   55641 server.go:559] external host was not specified, using 172.17.0.2
W0315 18:31:09.549] W0315 18:31:09.548525   55641 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0315 18:31:09.549] I0315 18:31:09.548747   55641 server.go:146] Version: v1.15.0-alpha.0.1226+b0494b081d5c97
W0315 18:31:09.819] I0315 18:31:09.819302   55641 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0315 18:31:09.820] I0315 18:31:09.819341   55641 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0315 18:31:09.821] E0315 18:31:09.821370   55641 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:09.822] E0315 18:31:09.821414   55641 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:09.822] E0315 18:31:09.821663   55641 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:09.822] E0315 18:31:09.821733   55641 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:09.823] E0315 18:31:09.821761   55641 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:09.823] E0315 18:31:09.821791   55641 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:09.823] I0315 18:31:09.821825   55641 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0315 18:31:09.823] I0315 18:31:09.821835   55641 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0315 18:31:09.824] I0315 18:31:09.824273   55641 clientconn.go:551] parsed scheme: ""
W0315 18:31:09.824] I0315 18:31:09.824308   55641 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 18:31:09.825] I0315 18:31:09.824373   55641 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 18:31:09.825] I0315 18:31:09.824424   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0315 18:31:10.321] W0315 18:31:10.320704   55641 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0315 18:31:10.817] I0315 18:31:10.816990   55641 clientconn.go:551] parsed scheme: ""
W0315 18:31:10.817] I0315 18:31:10.817030   55641 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 18:31:10.818] I0315 18:31:10.817075   55641 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 18:31:10.818] I0315 18:31:10.817179   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:31:10.823] I0315 18:31:10.822908   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:31:11.077] E0315 18:31:11.076786   55641 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:11.077] E0315 18:31:11.076849   55641 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:11.078] E0315 18:31:11.076928   55641 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:11.078] E0315 18:31:11.076955   55641 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:11.078] E0315 18:31:11.076976   55641 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:11.078] E0315 18:31:11.077020   55641 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0315 18:31:11.078] I0315 18:31:11.077040   55641 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0315 18:31:11.079] I0315 18:31:11.077045   55641 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0315 18:31:11.079] I0315 18:31:11.078355   55641 clientconn.go:551] parsed scheme: ""
W0315 18:31:11.079] I0315 18:31:11.078381   55641 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 18:31:11.079] I0315 18:31:11.078423   55641 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 18:31:11.079] I0315 18:31:11.078489   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 254 lines ...
W0315 18:31:47.387] I0315 18:31:47.387421   58959 controllermanager.go:497] Started "daemonset"
W0315 18:31:47.387] W0315 18:31:47.387454   58959 controllermanager.go:476] "bootstrapsigner" is disabled
W0315 18:31:47.388] W0315 18:31:47.387463   58959 controllermanager.go:489] Skipping "nodeipam"
W0315 18:31:47.388] I0315 18:31:47.387465   58959 daemon_controller.go:267] Starting daemon sets controller
W0315 18:31:47.388] I0315 18:31:47.387483   58959 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
W0315 18:31:47.388] I0315 18:31:47.388315   58959 node_lifecycle_controller.go:77] Sending events to api server
W0315 18:31:47.388] E0315 18:31:47.388385   58959 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
W0315 18:31:47.388] W0315 18:31:47.388396   58959 controllermanager.go:489] Skipping "cloud-node-lifecycle"
W0315 18:31:47.389] I0315 18:31:47.389064   58959 controllermanager.go:497] Started "persistentvolume-binder"
W0315 18:31:47.389] I0315 18:31:47.389096   58959 pv_controller_base.go:270] Starting persistent volume controller
W0315 18:31:47.389] I0315 18:31:47.389322   58959 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
W0315 18:31:47.389] I0315 18:31:47.389696   58959 controllermanager.go:497] Started "serviceaccount"
W0315 18:31:47.390] I0315 18:31:47.389865   58959 serviceaccounts_controller.go:115] Starting service account controller
... skipping 27 lines ...
W0315 18:31:47.394] I0315 18:31:47.394279   58959 controller_utils.go:1027] Waiting for caches to sync for TTL controller
W0315 18:31:47.395] I0315 18:31:47.395120   58959 controllermanager.go:497] Started "horizontalpodautoscaling"
W0315 18:31:47.395] I0315 18:31:47.395228   58959 horizontal.go:156] Starting HPA controller
W0315 18:31:47.395] I0315 18:31:47.395256   58959 controller_utils.go:1027] Waiting for caches to sync for HPA controller
W0315 18:31:47.395] I0315 18:31:47.395383   58959 controllermanager.go:497] Started "csrcleaner"
W0315 18:31:47.395] I0315 18:31:47.395672   58959 cleaner.go:81] Starting CSR cleaner controller
W0315 18:31:47.396] E0315 18:31:47.396228   58959 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0315 18:31:47.396] W0315 18:31:47.396266   58959 controllermanager.go:489] Skipping "service"
W0315 18:31:47.397] I0315 18:31:47.397213   58959 controllermanager.go:497] Started "deployment"
W0315 18:31:47.397] I0315 18:31:47.397535   58959 deployment_controller.go:152] Starting deployment controller
W0315 18:31:47.398] I0315 18:31:47.397594   58959 controller_utils.go:1027] Waiting for caches to sync for deployment controller
W0315 18:31:47.398] I0315 18:31:47.398086   58959 controllermanager.go:497] Started "replicationcontroller"
W0315 18:31:47.398] I0315 18:31:47.398258   58959 replica_set.go:182] Starting replicationcontroller controller
... skipping 38 lines ...
W0315 18:31:47.837] I0315 18:31:47.601620   58959 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0315 18:31:47.837] I0315 18:31:47.601639   58959 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0315 18:31:47.837] I0315 18:31:47.601672   58959 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0315 18:31:47.838] I0315 18:31:47.601701   58959 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
W0315 18:31:47.838] I0315 18:31:47.601833   58959 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0315 18:31:47.838] I0315 18:31:47.601866   58959 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0315 18:31:47.838] E0315 18:31:47.601889   58959 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0315 18:31:47.838] I0315 18:31:47.601912   58959 controllermanager.go:497] Started "resourcequota"
W0315 18:31:47.838] I0315 18:31:47.601975   58959 resource_quota_controller.go:276] Starting resource quota controller
W0315 18:31:47.838] I0315 18:31:47.602002   58959 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
W0315 18:31:47.838] I0315 18:31:47.602095   58959 resource_quota_monitor.go:301] QuotaMonitor running
W0315 18:31:47.839] I0315 18:31:47.610668   58959 controllermanager.go:497] Started "namespace"
W0315 18:31:47.839] I0315 18:31:47.610764   58959 namespace_controller.go:186] Starting namespace controller
... skipping 8 lines ...
W0315 18:31:47.840] I0315 18:31:47.613405   58959 endpoints_controller.go:166] Starting endpoint controller
W0315 18:31:47.840] I0315 18:31:47.613567   58959 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
W0315 18:31:47.840] I0315 18:31:47.614108   58959 controllermanager.go:497] Started "pv-protection"
W0315 18:31:47.840] W0315 18:31:47.614152   58959 controllermanager.go:489] Skipping "csrsigning"
W0315 18:31:47.840] I0315 18:31:47.614798   58959 pv_protection_controller.go:81] Starting PV protection controller
W0315 18:31:47.840] I0315 18:31:47.614816   58959 controller_utils.go:1027] Waiting for caches to sync for PV protection controller
W0315 18:31:47.840] W0315 18:31:47.652062   58959 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0315 18:31:47.841] I0315 18:31:47.681602   58959 controller_utils.go:1034] Caches are synced for ReplicaSet controller
W0315 18:31:47.841] I0315 18:31:47.683080   58959 controller_utils.go:1034] Caches are synced for attach detach controller
W0315 18:31:47.841] I0315 18:31:47.685134   58959 controller_utils.go:1034] Caches are synced for expand controller
W0315 18:31:47.841] I0315 18:31:47.686237   58959 controller_utils.go:1034] Caches are synced for GC controller
W0315 18:31:47.841] I0315 18:31:47.691601   58959 controller_utils.go:1034] Caches are synced for certificate controller
W0315 18:31:47.841] I0315 18:31:47.693789   58959 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
W0315 18:31:47.841] I0315 18:31:47.693792   58959 controller_utils.go:1034] Caches are synced for PVC protection controller
W0315 18:31:47.841] I0315 18:31:47.693994   58959 controller_utils.go:1034] Caches are synced for disruption controller
W0315 18:31:47.841] I0315 18:31:47.694009   58959 disruption.go:294] Sending events to api server.
W0315 18:31:47.842] I0315 18:31:47.694462   58959 controller_utils.go:1034] Caches are synced for TTL controller
W0315 18:31:47.842] I0315 18:31:47.695433   58959 controller_utils.go:1034] Caches are synced for HPA controller
W0315 18:31:47.842] I0315 18:31:47.697775   58959 controller_utils.go:1034] Caches are synced for deployment controller
W0315 18:31:47.842] I0315 18:31:47.698535   58959 controller_utils.go:1034] Caches are synced for ReplicationController controller
W0315 18:31:47.842] E0315 18:31:47.709743   58959 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0315 18:31:47.842] E0315 18:31:47.710691   58959 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0315 18:31:47.842] I0315 18:31:47.710884   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:31:47.843] I0315 18:31:47.711038   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:31:47.843] I0315 18:31:47.711164   58959 controller_utils.go:1034] Caches are synced for namespace controller
W0315 18:31:47.843] I0315 18:31:47.712998   58959 controller_utils.go:1034] Caches are synced for stateful set controller
W0315 18:31:47.843] I0315 18:31:47.714978   58959 controller_utils.go:1034] Caches are synced for PV protection controller
W0315 18:31:47.843] E0315 18:31:47.726835   58959 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0315 18:31:47.888] I0315 18:31:47.887803   58959 controller_utils.go:1034] Caches are synced for daemon sets controller
I0315 18:31:47.989] Successful: the flag '--client' shows correct client info
I0315 18:31:47.989] (BSuccessful: the flag '--client' correctly has no server version info
I0315 18:31:47.989] (B+++ [0315 18:31:47] Testing kubectl version: verify json output
I0315 18:31:48.035] Successful: --output json has correct client info
I0315 18:31:48.041] (BSuccessful: --output json has correct server info
... skipping 42 lines ...
I0315 18:31:49.212] 
I0315 18:31:49.214] +++ Running case: test-cmd.run_kubectl_local_proxy_tests 
I0315 18:31:49.217] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:31:49.219] +++ command: run_kubectl_local_proxy_tests
I0315 18:31:49.229] +++ [0315 18:31:49] Testing kubectl local proxy
I0315 18:31:49.235] +++ [0315 18:31:49] Starting kubectl proxy on random port; output file in proxy-port.out.esLXb; args: 
W0315 18:31:49.335] E0315 18:31:49.301303   58959 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0315 18:31:49.712] I0315 18:31:49.711942   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:31:49.713] I0315 18:31:49.712189   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:31:49.813] +++ [0315 18:31:49] Attempt 0 to read proxy-port.out.esLXb...
I0315 18:31:49.813] +++ [0315 18:31:49] kubectl proxy running on port 43007
I0315 18:31:49.814] +++ [0315 18:31:49] On try 1, kubectl proxy: ok
I0315 18:31:49.858] +++ [0315 18:31:49] Stopping proxy on port 43007
... skipping 19 lines ...
I0315 18:31:51.177] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:31:51.179] +++ command: run_RESTMapper_evaluation_tests
I0315 18:31:51.191] +++ [0315 18:31:51] Creating namespace namespace-1552674711-23381
I0315 18:31:51.260] namespace/namespace-1552674711-23381 created
I0315 18:31:51.329] Context "test" modified.
I0315 18:31:51.336] +++ [0315 18:31:51] Testing RESTMapper
I0315 18:31:51.449] +++ [0315 18:31:51] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0315 18:31:51.464] +++ exit code: 0
I0315 18:31:51.590] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0315 18:31:51.591] bindings                                                                      true         Binding
I0315 18:31:51.591] componentstatuses                 cs                                          false        ComponentStatus
I0315 18:31:51.591] configmaps                        cm                                          true         ConfigMap
I0315 18:31:51.592] endpoints                         ep                                          true         Endpoints
... skipping 694 lines ...
I0315 18:32:10.695] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0315 18:32:10.791] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0315 18:32:10.866] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0315 18:32:10.971] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0315 18:32:11.134] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:11.339] (Bpod/env-test-pod created
W0315 18:32:11.440] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0315 18:32:11.440] I0315 18:32:08.722374   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:11.440] I0315 18:32:08.722632   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:11.440] error: setting 'all' parameter but found a non empty selector. 
W0315 18:32:11.441] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 18:32:11.441] I0315 18:32:09.722948   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:11.441] I0315 18:32:09.723151   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:11.441] I0315 18:32:10.344036   55641 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0315 18:32:11.441] I0315 18:32:10.723513   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:11.441] I0315 18:32:10.723742   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:11.441] error: min-available and max-unavailable cannot be both specified
I0315 18:32:11.542] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0315 18:32:11.542] Name:               env-test-pod
I0315 18:32:11.542] Namespace:          test-kubectl-describe-pod
I0315 18:32:11.542] Priority:           0
I0315 18:32:11.543] PriorityClassName:  <none>
I0315 18:32:11.543] Node:               <none>
... skipping 169 lines ...
W0315 18:32:23.407] I0315 18:32:22.730908   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:23.407] I0315 18:32:22.960533   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674738-19360", Name:"modified", UID:"ac8f2a26-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-vfftr
I0315 18:32:23.564] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:23.710] (Bpod/valid-pod created
I0315 18:32:23.805] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 18:32:23.952] (BSuccessful
I0315 18:32:23.952] message:Error from server: cannot restore map from string
I0315 18:32:23.952] has:cannot restore map from string
I0315 18:32:24.038] Successful
I0315 18:32:24.038] message:pod/valid-pod patched (no change)
I0315 18:32:24.038] has:patched (no change)
I0315 18:32:24.131] pod/valid-pod patched
I0315 18:32:24.234] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0315 18:32:24.744] (Bpod/valid-pod patched
I0315 18:32:24.836] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0315 18:32:24.910] (Bpod/valid-pod patched
I0315 18:32:25.003] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0315 18:32:25.159] (Bpod/valid-pod patched
I0315 18:32:25.257] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 18:32:25.431] (B+++ [0315 18:32:25] "kubectl patch with resourceVersion 498" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0315 18:32:25.532] I0315 18:32:23.731131   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:25.532] I0315 18:32:23.731334   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:25.532] E0315 18:32:23.943700   55641 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
W0315 18:32:25.532] I0315 18:32:24.731543   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:25.533] I0315 18:32:24.731797   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:32:25.668] pod "valid-pod" deleted
I0315 18:32:25.681] pod/valid-pod replaced
I0315 18:32:25.775] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0315 18:32:25.933] (BSuccessful
I0315 18:32:25.933] message:error: --grace-period must have --force specified
I0315 18:32:25.933] has:\-\-grace-period must have \-\-force specified
W0315 18:32:26.034] I0315 18:32:25.732126   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:26.034] I0315 18:32:25.732372   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:32:26.134] Successful
I0315 18:32:26.135] message:error: --timeout must have --force specified
I0315 18:32:26.135] has:\-\-timeout must have \-\-force specified
I0315 18:32:26.251] node/node-v1-test created
W0315 18:32:26.352] W0315 18:32:26.251452   58959 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0315 18:32:26.453] node/node-v1-test replaced
I0315 18:32:26.505] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0315 18:32:26.577] (Bnode "node-v1-test" deleted
I0315 18:32:26.672] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 18:32:26.943] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0315 18:32:27.923] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 61 lines ...
W0315 18:32:32.232] Edit cancelled, no changes made.
W0315 18:32:32.232] Edit cancelled, no changes made.
W0315 18:32:32.232] Edit cancelled, no changes made.
W0315 18:32:32.232] I0315 18:32:27.733106   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:32.233] I0315 18:32:27.733314   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:32.233] Edit cancelled, no changes made.
W0315 18:32:32.233] error: 'name' already has a value (valid-pod), and --overwrite is false
W0315 18:32:32.233] I0315 18:32:28.733559   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:32.233] I0315 18:32:28.733756   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:32.233] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 18:32:32.233] I0315 18:32:29.734017   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:32.233] I0315 18:32:29.734179   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:32.234] I0315 18:32:30.734512   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 55 lines ...
I0315 18:32:35.577] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0315 18:32:35.579] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:32:35.582] +++ command: run_kubectl_create_error_tests
I0315 18:32:35.593] +++ [0315 18:32:35] Creating namespace namespace-1552674755-32754
I0315 18:32:35.668] namespace/namespace-1552674755-32754 created
I0315 18:32:35.741] Context "test" modified.
I0315 18:32:35.748] +++ [0315 18:32:35] Testing kubectl create with error
W0315 18:32:35.849] I0315 18:32:35.737637   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:35.849] I0315 18:32:35.737853   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:35.849] Error: must specify one of -f and -k
W0315 18:32:35.850] 
W0315 18:32:35.850] Create a resource from a file or from stdin.
W0315 18:32:35.850] 
W0315 18:32:35.850]  JSON and YAML formats are accepted.
W0315 18:32:35.850] 
W0315 18:32:35.850] Examples:
... skipping 41 lines ...
W0315 18:32:35.858] 
W0315 18:32:35.858] Usage:
W0315 18:32:35.858]   kubectl create -f FILENAME [options]
W0315 18:32:35.858] 
W0315 18:32:35.858] Use "kubectl <command> --help" for more information about a given command.
W0315 18:32:35.858] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0315 18:32:36.001] +++ [0315 18:32:35] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0315 18:32:36.102] kubectl convert is DEPRECATED and will be removed in a future version.
W0315 18:32:36.102] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0315 18:32:36.203] +++ exit code: 0
I0315 18:32:36.238] Recording: run_kubectl_apply_tests
I0315 18:32:36.238] Running command: run_kubectl_apply_tests
I0315 18:32:36.259] 
... skipping 23 lines ...
W0315 18:32:38.459] I0315 18:32:38.458795   55641 clientconn.go:551] parsed scheme: ""
W0315 18:32:38.460] I0315 18:32:38.458823   55641 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 18:32:38.460] I0315 18:32:38.458853   55641 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 18:32:38.460] I0315 18:32:38.458888   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:32:38.461] I0315 18:32:38.460063   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:32:38.461] I0315 18:32:38.461269   55641 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0315 18:32:38.557] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0315 18:32:38.658] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0315 18:32:38.679] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0315 18:32:38.713] +++ exit code: 0
I0315 18:32:38.799] Recording: run_kubectl_run_tests
I0315 18:32:38.800] Running command: run_kubectl_run_tests
I0315 18:32:38.820] 
... skipping 105 lines ...
W0315 18:32:42.073] I0315 18:32:41.740783   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:42.074] I0315 18:32:41.741680   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:32:42.174] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:42.287] (Bpod/selector-test-pod created
I0315 18:32:42.419] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0315 18:32:42.528] (BSuccessful
I0315 18:32:42.529] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0315 18:32:42.530] has:pods "selector-test-pod-dont-apply" not found
I0315 18:32:42.632] pod "selector-test-pod" deleted
I0315 18:32:42.653] +++ exit code: 0
I0315 18:32:42.722] Recording: run_kubectl_apply_deployments_tests
I0315 18:32:42.723] Running command: run_kubectl_apply_deployments_tests
I0315 18:32:42.742] 
... skipping 33 lines ...
I0315 18:32:44.967] replicaset.extensions "my-depl-656cffcbcc" deleted
I0315 18:32:44.974] pod "my-depl-64775887d7-ndfsl" deleted
I0315 18:32:44.979] pod "my-depl-656cffcbcc-wsnsl" deleted
W0315 18:32:45.080] I0315 18:32:44.742545   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:45.081] I0315 18:32:44.743093   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:45.081] I0315 18:32:44.962318   55641 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0315 18:32:45.081] E0315 18:32:45.001134   58959 replica_set.go:450] Sync "namespace-1552674762-2825/my-depl-656cffcbcc" failed with replicasets.apps "my-depl-656cffcbcc" not found
I0315 18:32:45.181] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:45.209] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:45.298] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:45.388] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:32:45.571] (Bdeployment.extensions/nginx created
W0315 18:32:45.673] I0315 18:32:45.585395   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674762-2825", Name:"nginx", UID:"ba09bfbb-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0315 18:32:45.673] I0315 18:32:45.594499   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674762-2825", Name:"nginx-776cc67f78", UID:"ba0acd5f-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-f9fbs
W0315 18:32:45.674] I0315 18:32:45.604672   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674762-2825", Name:"nginx-776cc67f78", UID:"ba0acd5f-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-2hzwc
W0315 18:32:45.674] I0315 18:32:45.624394   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674762-2825", Name:"nginx-776cc67f78", UID:"ba0acd5f-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-bdptf
W0315 18:32:45.743] I0315 18:32:45.742989   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:45.744] I0315 18:32:45.743256   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:32:45.844] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0315 18:32:50.020] (BSuccessful
I0315 18:32:50.020] message:Error from server (Conflict): error when applying patch:
I0315 18:32:50.021] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552674762-2825\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0315 18:32:50.021] to:
I0315 18:32:50.021] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0315 18:32:50.022] Name: "nginx", Namespace: "namespace-1552674762-2825"
I0315 18:32:50.024] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["namespace":"namespace-1552674762-2825" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1552674762-2825/deployments/nginx" "uid":"ba09bfbb-4750-11e9-ab52-0242ac110002" "resourceVersion":"607" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552674762-2825\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "generation":'\x01' "creationTimestamp":"2019-03-15T18:32:45Z" "labels":map["name":"nginx"] "managedFields":[map["fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[] "f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map["f:reason":map[] "f:status":map[] "f:type":map[] ".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[]]] "f:observedGeneration":map[]]] "manager":"kube-controller-manager" "operation":"Update" "apiVersion":"apps/v1" "time":"2019-03-15T18:32:45Z"] map["manager":"kubectl" "operation":"Update" "apiVersion":"extensions/v1beta1" "time":"2019-03-15T18:32:45Z" "fields":map["f:metadata":map["f:annotations":map["f:kubectl.kubernetes.io/last-applied-configuration":map[] ".":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:spec":map["f:terminationGracePeriodSeconds":map[] "f:containers":map["k:{\"name\":\"nginx\"}":map["f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[] ".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[]] "f:metadata":map["f:labels":map[".":map[] "f:name":map[]]]] "f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]]]]]]] "spec":map["selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03'] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" 
"lastUpdateTime":"2019-03-15T18:32:45Z" "lastTransitionTime":"2019-03-15T18:32:45Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]]]]}
I0315 18:32:50.025] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0315 18:32:50.025] has:Error from server (Conflict)
W0315 18:32:50.125] I0315 18:32:46.743444   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:50.126] I0315 18:32:46.743660   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:50.126] I0315 18:32:47.743859   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:50.127] I0315 18:32:47.744092   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:32:50.127] I0315 18:32:48.744319   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:32:50.127] I0315 18:32:48.744516   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 201 lines ...
I0315 18:33:02.632] +++ [0315 18:33:02] Creating namespace namespace-1552674782-24239
I0315 18:33:02.703] namespace/namespace-1552674782-24239 created
I0315 18:33:02.768] Context "test" modified.
I0315 18:33:02.776] +++ [0315 18:33:02] Testing kubectl get
I0315 18:33:02.856] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:02.934] Successful
I0315 18:33:02.934] message:Error from server (NotFound): pods "abc" not found
I0315 18:33:02.934] has:pods "abc" not found
I0315 18:33:03.019] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:03.096] Successful
I0315 18:33:03.097] message:Error from server (NotFound): pods "abc" not found
I0315 18:33:03.097] has:pods "abc" not found
I0315 18:33:03.183] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:03.260] Successful
I0315 18:33:03.260] message:{
I0315 18:33:03.260]     "apiVersion": "v1",
I0315 18:33:03.260]     "items": [],
... skipping 23 lines ...
I0315 18:33:03.570] has not:No resources found
I0315 18:33:03.646] Successful
I0315 18:33:03.647] message:NAME
I0315 18:33:03.647] has not:No resources found
I0315 18:33:03.730] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:03.821] Successful
I0315 18:33:03.822] message:error: the server doesn't have a resource type "foobar"
I0315 18:33:03.822] has not:No resources found
I0315 18:33:03.903] Successful
I0315 18:33:03.904] message:No resources found.
I0315 18:33:03.904] has:No resources found
I0315 18:33:03.981] Successful
I0315 18:33:03.981] message:
I0315 18:33:03.982] has not:No resources found
I0315 18:33:04.059] Successful
I0315 18:33:04.059] message:No resources found.
I0315 18:33:04.059] has:No resources found
I0315 18:33:04.152] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:04.234] Successful
I0315 18:33:04.234] message:Error from server (NotFound): pods "abc" not found
I0315 18:33:04.234] has:pods "abc" not found
I0315 18:33:04.235] FAIL!
I0315 18:33:04.236] message:Error from server (NotFound): pods "abc" not found
I0315 18:33:04.236] has not:List
I0315 18:33:04.236] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
W0315 18:33:04.336] I0315 18:33:02.751466   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:33:04.337] I0315 18:33:02.751669   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:33:04.337] I0315 18:33:03.751932   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:33:04.337] I0315 18:33:03.752144   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 717 lines ...
I0315 18:33:07.757] }
I0315 18:33:07.838] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 18:33:08.074] <no value>Successful
I0315 18:33:08.075] message:valid-pod:
I0315 18:33:08.075] has:valid-pod:
I0315 18:33:08.161] Successful
I0315 18:33:08.161] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0315 18:33:08.161] 	template was:
I0315 18:33:08.161] 		{.missing}
I0315 18:33:08.161] 	object given to jsonpath engine was:
I0315 18:33:08.163] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"time":"2019-03-15T18:33:07Z", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:terminationGracePeriodSeconds":map[string]interface {}{}, "f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{"f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}, ".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "apiVersion":"v1"}}, "name":"valid-pod", "namespace":"namespace-1552674787-20281", "selfLink":"/api/v1/namespaces/namespace-1552674787-20281/pods/valid-pod", "uid":"c734d01b-4750-11e9-ab52-0242ac110002", "resourceVersion":"706", "creationTimestamp":"2019-03-15T18:33:07Z"}, "spec":map[string]interface {}{"schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}}, "status":map[string]interface {}{"qosClass":"Guaranteed", "phase":"Pending"}}
I0315 18:33:08.163] has:missing is not found
I0315 18:33:08.245] Successful
I0315 18:33:08.246] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0315 18:33:08.246] 	template was:
I0315 18:33:08.246] 		{{.missing}}
I0315 18:33:08.246] 	raw data was:
I0315 18:33:08.247] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T18:33:07Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-03-15T18:33:07Z"}],"name":"valid-pod","namespace":"namespace-1552674787-20281","resourceVersion":"706","selfLink":"/api/v1/namespaces/namespace-1552674787-20281/pods/valid-pod","uid":"c734d01b-4750-11e9-ab52-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0315 18:33:08.247] 	object given to template engine was:
I0315 18:33:08.248] 		map[apiVersion:v1 kind:Pod metadata:map[resourceVersion:706 selfLink:/api/v1/namespaces/namespace-1552674787-20281/pods/valid-pod uid:c734d01b-4750-11e9-ab52-0242ac110002 creationTimestamp:2019-03-15T18:33:07Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:spec:map[f:terminationGracePeriodSeconds:map[] f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[f:terminationMessagePath:map[] f:terminationMessagePolicy:map[] .:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[]] f:metadata:map[f:labels:map[.:map[] f:name:map[]]]] manager:kubectl operation:Update time:2019-03-15T18:33:07Z]] name:valid-pod namespace:namespace-1552674787-20281] spec:map[enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log]] dnsPolicy:ClusterFirst] status:map[phase:Pending qosClass:Guaranteed]]
I0315 18:33:08.248] has:map has no entry for key "missing"
W0315 18:33:08.348] I0315 18:33:07.753995   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:33:08.349] I0315 18:33:07.754270   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:33:08.349] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0315 18:33:08.755] I0315 18:33:08.754594   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:33:08.755] I0315 18:33:08.754862   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:33:09.326] E0315 18:33:09.325957   70492 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0315 18:33:09.427] Successful
I0315 18:33:09.427] message:NAME        READY   STATUS    RESTARTS   AGE
I0315 18:33:09.427] valid-pod   0/1     Pending   0          1s
... skipping 158 lines ...
I0315 18:33:11.609]   terminationGracePeriodSeconds: 30
I0315 18:33:11.609] status:
I0315 18:33:11.609]   phase: Pending
I0315 18:33:11.609]   qosClass: Guaranteed
I0315 18:33:11.609] has:name: valid-pod
I0315 18:33:11.609] Successful
I0315 18:33:11.609] message:Error from server (NotFound): pods "invalid-pod" not found
I0315 18:33:11.609] has:"invalid-pod" not found
I0315 18:33:11.672] pod "valid-pod" deleted
I0315 18:33:11.765] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:11.932] pod/redis-master created
I0315 18:33:11.938] pod/valid-pod created
I0315 18:33:12.031] Successful
... skipping 282 lines ...
I0315 18:33:17.158] Running command: run_create_secret_tests
I0315 18:33:17.184] 
I0315 18:33:17.187] +++ Running case: test-cmd.run_create_secret_tests 
I0315 18:33:17.189] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:33:17.192] +++ command: run_create_secret_tests
I0315 18:33:17.287] Successful
I0315 18:33:17.287] message:Error from server (NotFound): secrets "mysecret" not found
I0315 18:33:17.287] has:secrets "mysecret" not found
W0315 18:33:17.388] I0315 18:33:16.322114   55641 clientconn.go:551] parsed scheme: ""
W0315 18:33:17.388] I0315 18:33:16.322153   55641 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 18:33:17.388] I0315 18:33:16.322187   55641 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 18:33:17.389] I0315 18:33:16.322265   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:33:17.389] I0315 18:33:16.322647   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:33:17.389] No resources found.
W0315 18:33:17.389] No resources found.
W0315 18:33:17.389] I0315 18:33:16.758637   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:33:17.389] I0315 18:33:16.758804   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:33:17.490] Successful
I0315 18:33:17.490] message:Error from server (NotFound): secrets "mysecret" not found
I0315 18:33:17.490] has:secrets "mysecret" not found
I0315 18:33:17.491] Successful
I0315 18:33:17.491] message:user-specified
I0315 18:33:17.491] has:user-specified
I0315 18:33:17.530] Successful
I0315 18:33:17.615] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"cd235197-4750-11e9-ab52-0242ac110002","resourceVersion":"813","creationTimestamp":"2019-03-15T18:33:17Z"}}
... skipping 178 lines ...
I0315 18:33:21.523] has:Timeout exceeded while reading body
I0315 18:33:21.597] Successful
I0315 18:33:21.598] message:NAME        READY   STATUS    RESTARTS   AGE
I0315 18:33:21.598] valid-pod   0/1     Pending   0          1s
I0315 18:33:21.598] has:valid-pod
I0315 18:33:21.667] Successful
I0315 18:33:21.667] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0315 18:33:21.668] has:Invalid timeout value
I0315 18:33:21.742] pod "valid-pod" deleted
I0315 18:33:21.763] +++ exit code: 0
I0315 18:33:21.838] Recording: run_crd_tests
I0315 18:33:21.838] Running command: run_crd_tests
I0315 18:33:21.859] 
... skipping 249 lines ...
I0315 18:33:26.893] foo.company.com/test patched
I0315 18:33:26.931] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0315 18:33:27.056] foo.company.com/test patched
I0315 18:33:27.202] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0315 18:33:27.398] foo.company.com/test patched
I0315 18:33:27.530] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0315 18:33:27.748] +++ [0315 18:33:27] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0315 18:33:27.838] {
I0315 18:33:27.839]     "apiVersion": "company.com/v1",
I0315 18:33:27.839]     "kind": "Foo",
I0315 18:33:27.839]     "metadata": {
I0315 18:33:27.839]         "annotations": {
I0315 18:33:27.839]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 303 lines ...
W0315 18:33:50.187] I0315 18:33:50.186549   58959 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0315 18:33:50.188] I0315 18:33:50.187538   55641 clientconn.go:551] parsed scheme: ""
W0315 18:33:50.188] I0315 18:33:50.187576   55641 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0315 18:33:50.188] I0315 18:33:50.187620   55641 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0315 18:33:50.188] I0315 18:33:50.187658   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:33:50.189] I0315 18:33:50.188803   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:33:50.212] E0315 18:33:50.211081   58959 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0315 18:33:50.287] I0315 18:33:50.286890   58959 controller_utils.go:1034] Caches are synced for garbage collector controller
I0315 18:33:50.392] crd.sh:321: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:33:50.563] foo.company.com/test created
I0315 18:33:50.660] crd.sh:327: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
I0315 18:33:50.743] crd.sh:330: Successful get foos/test {{.someField}}: field1
I0315 18:33:50.906] foo.company.com/test unchanged
... skipping 97 lines ...
I0315 18:34:02.392] +++ [0315 18:34:02] Testing cmd with image
I0315 18:34:02.478] Successful
I0315 18:34:02.479] message:deployment.apps/test1 created
I0315 18:34:02.479] has:deployment.apps/test1 created
I0315 18:34:02.561] deployment.extensions "test1" deleted
I0315 18:34:02.642] Successful
I0315 18:34:02.643] message:error: Invalid image name "InvalidImageName": invalid reference format
I0315 18:34:02.643] has:error: Invalid image name "InvalidImageName": invalid reference format
I0315 18:34:02.658] +++ exit code: 0
I0315 18:34:02.715] +++ [0315 18:34:02] Testing recursive resources
I0315 18:34:02.721] +++ [0315 18:34:02] Creating namespace namespace-1552674842-30210
I0315 18:34:02.790] namespace/namespace-1552674842-30210 created
I0315 18:34:02.855] Context "test" modified.
I0315 18:34:02.948] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:03.216] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:03.218] Successful
I0315 18:34:03.219] message:pod/busybox0 created
I0315 18:34:03.219] pod/busybox1 created
I0315 18:34:03.219] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 18:34:03.219] has:error validating data: kind not set
I0315 18:34:03.325] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:03.503] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0315 18:34:03.505] Successful
I0315 18:34:03.506] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:03.506] has:Object 'Kind' is missing
I0315 18:34:03.595] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:03.858] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0315 18:34:03.861] Successful
I0315 18:34:03.861] message:pod/busybox0 replaced
I0315 18:34:03.861] pod/busybox1 replaced
I0315 18:34:03.861] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 18:34:03.861] has:error validating data: kind not set
I0315 18:34:03.950] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:04.043] Successful
I0315 18:34:04.043] message:Name:               busybox0
I0315 18:34:04.043] Namespace:          namespace-1552674842-30210
I0315 18:34:04.043] Priority:           0
I0315 18:34:04.044] PriorityClassName:  <none>
... skipping 159 lines ...
I0315 18:34:04.056] has:Object 'Kind' is missing
I0315 18:34:04.142] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:04.320] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0315 18:34:04.323] Successful
I0315 18:34:04.323] message:pod/busybox0 annotated
I0315 18:34:04.323] pod/busybox1 annotated
I0315 18:34:04.323] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:04.323] has:Object 'Kind' is missing
I0315 18:34:04.409] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:04.676] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0315 18:34:04.678] Successful
I0315 18:34:04.678] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0315 18:34:04.678] pod/busybox0 configured
I0315 18:34:04.678] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0315 18:34:04.678] pod/busybox1 configured
I0315 18:34:04.679] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 18:34:04.679] has:error validating data: kind not set
I0315 18:34:04.763] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:04.916] deployment.apps/nginx created
I0315 18:34:05.015] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0315 18:34:05.111] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 18:34:05.273] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0315 18:34:05.275] Successful
... skipping 42 lines ...
I0315 18:34:05.352] deployment.extensions "nginx" deleted
I0315 18:34:05.452] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:05.613] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:05.615] Successful
I0315 18:34:05.615] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0315 18:34:05.615] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0315 18:34:05.615] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:05.615] has:Object 'Kind' is missing
I0315 18:34:05.703] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:05.788] Successful
I0315 18:34:05.789] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:05.789] has:busybox0:busybox1:
I0315 18:34:05.791] Successful
I0315 18:34:05.792] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:05.792] has:Object 'Kind' is missing
I0315 18:34:05.878] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:05.970] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:06.061] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0315 18:34:06.063] Successful
I0315 18:34:06.063] message:pod/busybox0 labeled
I0315 18:34:06.064] pod/busybox1 labeled
I0315 18:34:06.064] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:06.064] has:Object 'Kind' is missing
I0315 18:34:06.156] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:06.244] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:06.334] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0315 18:34:06.336] Successful
I0315 18:34:06.336] message:pod/busybox0 patched
I0315 18:34:06.336] pod/busybox1 patched
I0315 18:34:06.337] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:06.337] has:Object 'Kind' is missing
I0315 18:34:06.427] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:06.601] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:06.603] Successful
I0315 18:34:06.603] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 18:34:06.603] pod "busybox0" force deleted
I0315 18:34:06.603] pod "busybox1" force deleted
I0315 18:34:06.604] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 18:34:06.604] has:Object 'Kind' is missing
I0315 18:34:06.689] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:06.844] replicationcontroller/busybox0 created
I0315 18:34:06.849] replicationcontroller/busybox1 created
I0315 18:34:06.952] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:07.040] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:07.130] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 18:34:07.222] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 18:34:07.395] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0315 18:34:07.488] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0315 18:34:07.490] Successful
I0315 18:34:07.490] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0315 18:34:07.490] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0315 18:34:07.490] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:07.490] has:Object 'Kind' is missing
I0315 18:34:07.570] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0315 18:34:07.660] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0315 18:34:07.760] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:07.843] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 18:34:07.933] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 18:34:08.119] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0315 18:34:08.214] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0315 18:34:08.216] Successful
I0315 18:34:08.217] message:service/busybox0 exposed
I0315 18:34:08.217] service/busybox1 exposed
I0315 18:34:08.217] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:08.217] has:Object 'Kind' is missing
I0315 18:34:08.306] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:08.395] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 18:34:08.483] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 18:34:08.674] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0315 18:34:08.759] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0315 18:34:08.761] Successful
I0315 18:34:08.762] message:replicationcontroller/busybox0 scaled
I0315 18:34:08.762] replicationcontroller/busybox1 scaled
I0315 18:34:08.762] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:08.762] has:Object 'Kind' is missing
I0315 18:34:08.850] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 18:34:09.021] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:09.023] Successful
I0315 18:34:09.024] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 18:34:09.024] replicationcontroller "busybox0" force deleted
I0315 18:34:09.024] replicationcontroller "busybox1" force deleted
I0315 18:34:09.024] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:09.024] has:Object 'Kind' is missing
I0315 18:34:09.109] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:09.276] deployment.apps/nginx1-deployment created
I0315 18:34:09.280] deployment.apps/nginx0-deployment created
W0315 18:34:09.381] Error from server (NotFound): namespaces "non-native-resources" not found
W0315 18:34:09.381] I0315 18:34:01.784763   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:09.381] I0315 18:34:01.784954   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:09.382] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0315 18:34:09.382] I0315 18:34:02.470713   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674842-10030", Name:"test1", UID:"e7def241-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"973", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-848d5d4b47 to 1
W0315 18:34:09.382] I0315 18:34:02.477067   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674842-10030", Name:"test1-848d5d4b47", UID:"e7dfd7d9-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"974", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-frptd
W0315 18:34:09.382] I0315 18:34:02.785358   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
... skipping 10 lines ...
W0315 18:34:09.384] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0315 18:34:09.384] I0315 18:34:05.786787   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:09.384] I0315 18:34:05.786959   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:09.385] I0315 18:34:06.548964   58959 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0315 18:34:09.385] I0315 18:34:06.787187   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:09.385] I0315 18:34:06.787446   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:09.385] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 18:34:09.385] I0315 18:34:06.848976   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674842-30210", Name:"busybox0", UID:"ea7b40b3-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hl4hv
W0315 18:34:09.386] I0315 18:34:06.853257   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674842-30210", Name:"busybox1", UID:"ea7c169c-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-m8ktj
W0315 18:34:09.386] I0315 18:34:07.787631   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:09.386] I0315 18:34:07.788096   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:09.386] I0315 18:34:08.576288   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674842-30210", Name:"busybox0", UID:"ea7b40b3-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-vv2l6
W0315 18:34:09.386] I0315 18:34:08.585854   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674842-30210", Name:"busybox1", UID:"ea7c169c-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jd6qr
W0315 18:34:09.387] I0315 18:34:08.788404   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:09.387] I0315 18:34:08.788614   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:09.387] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 18:34:09.387] I0315 18:34:09.280923   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674842-30210", Name:"nginx1-deployment", UID:"ebee42f2-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0315 18:34:09.387] I0315 18:34:09.284730   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674842-30210", Name:"nginx1-deployment-7c76c6cbb8", UID:"ebeeff33-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-zhx8k
W0315 18:34:09.388] I0315 18:34:09.284875   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674842-30210", Name:"nginx0-deployment", UID:"ebef08c2-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0315 18:34:09.388] I0315 18:34:09.289029   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674842-30210", Name:"nginx0-deployment-7bb85585d7", UID:"ebefa7ce-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-hfv88
W0315 18:34:09.388] I0315 18:34:09.289063   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674842-30210", Name:"nginx1-deployment-7c76c6cbb8", UID:"ebeeff33-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-26rg8
W0315 18:34:09.388] I0315 18:34:09.292529   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674842-30210", Name:"nginx0-deployment-7bb85585d7", UID:"ebefa7ce-4750-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-kk8fv
I0315 18:34:09.489] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0315 18:34:09.489] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0315 18:34:09.670] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0315 18:34:09.673] Successful
I0315 18:34:09.673] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0315 18:34:09.673] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0315 18:34:09.673] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 18:34:09.673] has:Object 'Kind' is missing
I0315 18:34:09.765] deployment.apps/nginx1-deployment paused
I0315 18:34:09.771] deployment.apps/nginx0-deployment paused
W0315 18:34:09.872] I0315 18:34:09.788822   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:09.872] I0315 18:34:09.789364   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:34:09.973] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
... skipping 12 lines ...
I0315 18:34:10.224] 1         <none>
I0315 18:34:10.224] 
I0315 18:34:10.224] deployment.apps/nginx0-deployment 
I0315 18:34:10.224] REVISION  CHANGE-CAUSE
I0315 18:34:10.225] 1         <none>
I0315 18:34:10.225] 
I0315 18:34:10.225] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 18:34:10.225] has:nginx0-deployment
I0315 18:34:10.226] Successful
I0315 18:34:10.226] message:deployment.apps/nginx1-deployment 
I0315 18:34:10.226] REVISION  CHANGE-CAUSE
I0315 18:34:10.226] 1         <none>
I0315 18:34:10.226] 
I0315 18:34:10.227] deployment.apps/nginx0-deployment 
I0315 18:34:10.227] REVISION  CHANGE-CAUSE
I0315 18:34:10.227] 1         <none>
I0315 18:34:10.227] 
I0315 18:34:10.227] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 18:34:10.227] has:nginx1-deployment
I0315 18:34:10.228] Successful
I0315 18:34:10.229] message:deployment.apps/nginx1-deployment 
I0315 18:34:10.229] REVISION  CHANGE-CAUSE
I0315 18:34:10.229] 1         <none>
I0315 18:34:10.229] 
I0315 18:34:10.229] deployment.apps/nginx0-deployment 
I0315 18:34:10.229] REVISION  CHANGE-CAUSE
I0315 18:34:10.229] 1         <none>
I0315 18:34:10.229] 
I0315 18:34:10.229] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 18:34:10.230] has:Object 'Kind' is missing
I0315 18:34:10.305] deployment.apps "nginx1-deployment" force deleted
I0315 18:34:10.310] deployment.apps "nginx0-deployment" force deleted
W0315 18:34:10.411] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 18:34:10.411] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0315 18:34:10.790] I0315 18:34:10.789656   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:10.790] I0315 18:34:10.789881   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:34:11.405] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:11.568] replicationcontroller/busybox0 created
I0315 18:34:11.573] replicationcontroller/busybox1 created
I0315 18:34:11.671] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
... skipping 6 lines ...
I0315 18:34:11.761] message:no rollbacker has been implemented for "ReplicationController"
I0315 18:34:11.761] no rollbacker has been implemented for "ReplicationController"
I0315 18:34:11.761] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.762] has:Object 'Kind' is missing
I0315 18:34:11.850] Successful
I0315 18:34:11.850] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.851] error: replicationcontrollers "busybox0" pausing is not supported
I0315 18:34:11.851] error: replicationcontrollers "busybox1" pausing is not supported
I0315 18:34:11.851] has:Object 'Kind' is missing
I0315 18:34:11.852] Successful
I0315 18:34:11.852] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.852] error: replicationcontrollers "busybox0" pausing is not supported
I0315 18:34:11.853] error: replicationcontrollers "busybox1" pausing is not supported
I0315 18:34:11.853] has:replicationcontrollers "busybox0" pausing is not supported
I0315 18:34:11.854] Successful
I0315 18:34:11.854] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.854] error: replicationcontrollers "busybox0" pausing is not supported
I0315 18:34:11.854] error: replicationcontrollers "busybox1" pausing is not supported
I0315 18:34:11.854] has:replicationcontrollers "busybox1" pausing is not supported
I0315 18:34:11.942] Successful
I0315 18:34:11.942] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.942] error: replicationcontrollers "busybox0" resuming is not supported
I0315 18:34:11.942] error: replicationcontrollers "busybox1" resuming is not supported
I0315 18:34:11.942] has:Object 'Kind' is missing
I0315 18:34:11.944] Successful
I0315 18:34:11.944] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.944] error: replicationcontrollers "busybox0" resuming is not supported
I0315 18:34:11.944] error: replicationcontrollers "busybox1" resuming is not supported
I0315 18:34:11.945] has:replicationcontrollers "busybox0" resuming is not supported
I0315 18:34:11.946] Successful
I0315 18:34:11.947] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 18:34:11.947] error: replicationcontrollers "busybox0" resuming is not supported
I0315 18:34:11.947] error: replicationcontrollers "busybox1" resuming is not supported
I0315 18:34:11.947] has:replicationcontrollers "busybox0" resuming is not supported
I0315 18:34:12.019] replicationcontroller "busybox0" force deleted
I0315 18:34:12.023] replicationcontroller "busybox1" force deleted
W0315 18:34:12.124] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 18:34:12.124] I0315 18:34:11.573056   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674842-30210", Name:"busybox0", UID:"ed4c1cc0-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-pnzbm
W0315 18:34:12.124] I0315 18:34:11.577052   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674842-30210", Name:"busybox1", UID:"ed4ce527-4750-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1123", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-zrj2c
W0315 18:34:12.124] I0315 18:34:11.790221   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:12.125] I0315 18:34:11.790505   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:12.125] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 18:34:12.125] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0315 18:34:12.791] I0315 18:34:12.790806   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:12.791] I0315 18:34:12.791020   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:34:13.031] Recording: run_namespace_tests
I0315 18:34:13.031] Running command: run_namespace_tests
I0315 18:34:13.052] 
I0315 18:34:13.054] +++ Running case: test-cmd.run_namespace_tests 
... skipping 12 lines ...
W0315 18:34:16.793] I0315 18:34:16.792928   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:16.793] I0315 18:34:16.793128   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:17.794] I0315 18:34:17.793450   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:17.794] I0315 18:34:17.793687   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:34:18.422] namespace/my-namespace condition met
I0315 18:34:18.504] Successful
I0315 18:34:18.504] message:Error from server (NotFound): namespaces "my-namespace" not found
I0315 18:34:18.504] has: not found
I0315 18:34:18.602] core.sh:1336: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0315 18:34:18.670] namespace/other created
I0315 18:34:18.762] core.sh:1340: Successful get namespaces/other {{.metadata.name}}: other
I0315 18:34:18.849] core.sh:1344: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:19.005] pod/valid-pod created
I0315 18:34:19.103] core.sh:1348: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 18:34:19.199] core.sh:1350: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 18:34:19.273] Successful
I0315 18:34:19.273] message:error: a resource cannot be retrieved by name across all namespaces
I0315 18:34:19.274] has:a resource cannot be retrieved by name across all namespaces
I0315 18:34:19.360] core.sh:1357: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 18:34:19.436] pod "valid-pod" force deleted
I0315 18:34:19.529] core.sh:1361: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:34:19.600] namespace "other" deleted
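The namespace test block above creates a pod in a dedicated namespace, reads it back with both spellings of the namespace flag, and checks that a by-name get across all namespaces is rejected. A minimal sketch, assuming a reachable test cluster (the manifest file name is a placeholder; everything else is taken from the log):
  kubectl create namespace other
  kubectl create -f valid-pod.yaml --namespace=other
  kubectl get pods --namespace=other
  kubectl get pods -n other
  # a named resource cannot be fetched across every namespace at once:
  kubectl get pods valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
  kubectl delete namespace other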
W0315 18:34:19.700] I0315 18:34:18.793964   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:19.701] I0315 18:34:18.794253   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:19.701] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 18:34:19.795] I0315 18:34:19.794460   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:19.795] I0315 18:34:19.794674   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:20.414] E0315 18:34:20.413319   58959 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0315 18:34:20.490] I0315 18:34:20.489371   58959 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
W0315 18:34:20.590] I0315 18:34:20.589687   58959 controller_utils.go:1034] Caches are synced for garbage collector controller
W0315 18:34:20.795] I0315 18:34:20.794922   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:20.795] I0315 18:34:20.795151   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:21.796] I0315 18:34:21.795474   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:21.796] I0315 18:34:21.795682   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
... skipping 151 lines ...
I0315 18:34:40.182] +++ command: run_client_config_tests
I0315 18:34:40.196] +++ [0315 18:34:40] Creating namespace namespace-1552674880-6910
I0315 18:34:40.264] namespace/namespace-1552674880-6910 created
I0315 18:34:40.333] Context "test" modified.
I0315 18:34:40.340] +++ [0315 18:34:40] Testing client config
I0315 18:34:40.407] Successful
I0315 18:34:40.407] message:error: stat missing: no such file or directory
I0315 18:34:40.407] has:missing: no such file or directory
I0315 18:34:40.473] Successful
I0315 18:34:40.474] message:error: stat missing: no such file or directory
I0315 18:34:40.474] has:missing: no such file or directory
I0315 18:34:40.542] Successful
I0315 18:34:40.542] message:error: stat missing: no such file or directory
I0315 18:34:40.542] has:missing: no such file or directory
I0315 18:34:40.613] Successful
I0315 18:34:40.613] message:Error in configuration: context was not found for specified context: missing-context
I0315 18:34:40.613] has:context was not found for specified context: missing-context
I0315 18:34:40.683] Successful
I0315 18:34:40.683] message:error: no server found for cluster "missing-cluster"
I0315 18:34:40.684] has:no server found for cluster "missing-cluster"
I0315 18:34:40.754] Successful
I0315 18:34:40.754] message:error: auth info "missing-user" does not exist
I0315 18:34:40.754] has:auth info "missing-user" does not exist
W0315 18:34:40.854] I0315 18:34:40.805886   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:40.855] I0315 18:34:40.806092   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:34:40.955] Successful
I0315 18:34:40.956] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0315 18:34:40.956] has:Error loading config file
I0315 18:34:40.957] Successful
I0315 18:34:40.957] message:error: stat missing-config: no such file or directory
I0315 18:34:40.957] has:no such file or directory
I0315 18:34:40.973] +++ exit code: 0
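The client-config failures above correspond to pointing kubectl at kubeconfig pieces that do not exist, plus one kubeconfig with an unknown apiVersion. Something along these lines reproduces each message (the "get pods" verb is an arbitrary choice; the flags match the errors shown):
  kubectl get pods --kubeconfig=missing              # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context         # context was not found for specified context
  kubectl get pods --cluster=missing-cluster         # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user               # auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # file declares version "v-1", so loading the config fails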
I0315 18:34:41.027] Recording: run_service_accounts_tests
I0315 18:34:41.027] Running command: run_service_accounts_tests
I0315 18:34:41.050] 
I0315 18:34:41.052] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 46 lines ...
I0315 18:34:47.842] Labels:                        run=pi
I0315 18:34:47.842] Annotations:                   <none>
I0315 18:34:47.842] Schedule:                      59 23 31 2 *
I0315 18:34:47.842] Concurrency Policy:            Allow
I0315 18:34:47.842] Suspend:                       False
I0315 18:34:47.842] Successful Job History Limit:  824643331832
I0315 18:34:47.842] Failed Job History Limit:      1
I0315 18:34:47.842] Starting Deadline Seconds:     <unset>
I0315 18:34:47.843] Selector:                      <unset>
I0315 18:34:47.843] Parallelism:                   <unset>
I0315 18:34:47.843] Completions:                   <unset>
I0315 18:34:47.843] Pod Template:
I0315 18:34:47.843]   Labels:  run=pi
... skipping 31 lines ...
I0315 18:34:48.373]                 job-name=test-job
I0315 18:34:48.373]                 run=pi
I0315 18:34:48.373] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0315 18:34:48.373] Parallelism:    1
I0315 18:34:48.373] Completions:    1
I0315 18:34:48.373] Start Time:     Fri, 15 Mar 2019 18:34:48 +0000
I0315 18:34:48.373] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0315 18:34:48.373] Pod Template:
I0315 18:34:48.373]   Labels:  controller-uid=031247ab-4751-11e9-ab52-0242ac110002
I0315 18:34:48.373]            job-name=test-job
I0315 18:34:48.373]            run=pi
I0315 18:34:48.373]   Containers:
I0315 18:34:48.374]    pi:
... skipping 411 lines ...
I0315 18:34:58.167]   sessionAffinity: None
I0315 18:34:58.167]   type: ClusterIP
I0315 18:34:58.167] status:
I0315 18:34:58.167]   loadBalancer: {}
W0315 18:34:58.268] I0315 18:34:57.814887   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:34:58.268] I0315 18:34:57.815105   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:34:58.268] error: you must specify resources by --filename when --local is set.
W0315 18:34:58.269] Example resource specifications include:
W0315 18:34:58.269]    '-f rsrc.yaml'
W0315 18:34:58.269]    '--filename=rsrc.json'
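The "--filename when --local" error above is kubectl refusing a purely client-side (--local) mutation when no input manifest was supplied. One plausible reconstruction, using kubectl set selector on the redis-master service being tested here (the file name is a placeholder):
  # --local never contacts the server, so the object must come from a file
  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml
  # naming the resource instead of passing -f triggers the error shown above
  kubectl set selector services redis-master role=padawan --local -o yaml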
I0315 18:34:58.369] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0315 18:34:58.501] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0315 18:34:58.581] service "redis-master" deleted
... skipping 113 lines ...
I0315 18:35:05.354] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 18:35:05.446] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0315 18:35:05.547] daemonset.extensions/bind rolled back
I0315 18:35:05.644] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 18:35:05.735] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 18:35:05.841] Successful
I0315 18:35:05.842] message:error: unable to find specified revision 1000000 in history
I0315 18:35:05.842] has:unable to find specified revision
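The "bind rolled back" line and the "unable to find specified revision 1000000" failure above map onto the rollout subcommands run against a DaemonSet; roughly (the DaemonSet name is from the log, the ordering is an assumption):
  kubectl rollout undo daemonset/bind                        # back to the previous revision
  kubectl rollout undo daemonset/bind --to-revision=1000000  # error: unable to find specified revision 1000000 in history
  kubectl rollout history daemonset/bind                     # list the revisions that actually exist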
W0315 18:35:05.942] I0315 18:35:01.816792   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:05.943] I0315 18:35:01.816994   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:05.943] I0315 18:35:02.817300   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:05.943] I0315 18:35:02.817454   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:05.943] I0315 18:35:03.112274   55641 controller.go:606] quota admission added evaluator for: daemonsets.extensions
... skipping 33 lines ...
I0315 18:35:07.700] Namespace:    namespace-1552674906-28513
I0315 18:35:07.700] Selector:     app=guestbook,tier=frontend
I0315 18:35:07.700] Labels:       app=guestbook
I0315 18:35:07.700]               tier=frontend
I0315 18:35:07.700] Annotations:  <none>
I0315 18:35:07.700] Replicas:     3 current / 3 desired
I0315 18:35:07.701] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:07.701] Pod Template:
I0315 18:35:07.701]   Labels:  app=guestbook
I0315 18:35:07.701]            tier=frontend
I0315 18:35:07.701]   Containers:
I0315 18:35:07.701]    php-redis:
I0315 18:35:07.701]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 18:35:07.814] Namespace:    namespace-1552674906-28513
I0315 18:35:07.814] Selector:     app=guestbook,tier=frontend
I0315 18:35:07.814] Labels:       app=guestbook
I0315 18:35:07.814]               tier=frontend
I0315 18:35:07.815] Annotations:  <none>
I0315 18:35:07.815] Replicas:     3 current / 3 desired
I0315 18:35:07.815] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:07.815] Pod Template:
I0315 18:35:07.815]   Labels:  app=guestbook
I0315 18:35:07.815]            tier=frontend
I0315 18:35:07.815]   Containers:
I0315 18:35:07.815]    php-redis:
I0315 18:35:07.816]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0315 18:35:07.917] Namespace:    namespace-1552674906-28513
I0315 18:35:07.917] Selector:     app=guestbook,tier=frontend
I0315 18:35:07.917] Labels:       app=guestbook
I0315 18:35:07.917]               tier=frontend
I0315 18:35:07.917] Annotations:  <none>
I0315 18:35:07.917] Replicas:     3 current / 3 desired
I0315 18:35:07.917] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:07.918] Pod Template:
I0315 18:35:07.918]   Labels:  app=guestbook
I0315 18:35:07.918]            tier=frontend
I0315 18:35:07.918]   Containers:
I0315 18:35:07.918]    php-redis:
I0315 18:35:07.918]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0315 18:35:08.122] Namespace:    namespace-1552674906-28513
I0315 18:35:08.122] Selector:     app=guestbook,tier=frontend
I0315 18:35:08.122] Labels:       app=guestbook
I0315 18:35:08.122]               tier=frontend
I0315 18:35:08.122] Annotations:  <none>
I0315 18:35:08.122] Replicas:     3 current / 3 desired
I0315 18:35:08.122] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:08.123] Pod Template:
I0315 18:35:08.123]   Labels:  app=guestbook
I0315 18:35:08.123]            tier=frontend
I0315 18:35:08.123]   Containers:
I0315 18:35:08.123]    php-redis:
I0315 18:35:08.123]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0315 18:35:08.176] Namespace:    namespace-1552674906-28513
I0315 18:35:08.176] Selector:     app=guestbook,tier=frontend
I0315 18:35:08.177] Labels:       app=guestbook
I0315 18:35:08.177]               tier=frontend
I0315 18:35:08.177] Annotations:  <none>
I0315 18:35:08.177] Replicas:     3 current / 3 desired
I0315 18:35:08.177] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:08.177] Pod Template:
I0315 18:35:08.177]   Labels:  app=guestbook
I0315 18:35:08.177]            tier=frontend
I0315 18:35:08.177]   Containers:
I0315 18:35:08.177]    php-redis:
I0315 18:35:08.177]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 18:35:08.286] Namespace:    namespace-1552674906-28513
I0315 18:35:08.286] Selector:     app=guestbook,tier=frontend
I0315 18:35:08.286] Labels:       app=guestbook
I0315 18:35:08.286]               tier=frontend
I0315 18:35:08.286] Annotations:  <none>
I0315 18:35:08.286] Replicas:     3 current / 3 desired
I0315 18:35:08.286] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:08.286] Pod Template:
I0315 18:35:08.286]   Labels:  app=guestbook
I0315 18:35:08.286]            tier=frontend
I0315 18:35:08.286]   Containers:
I0315 18:35:08.286]    php-redis:
I0315 18:35:08.287]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 18:35:08.396] Namespace:    namespace-1552674906-28513
I0315 18:35:08.396] Selector:     app=guestbook,tier=frontend
I0315 18:35:08.396] Labels:       app=guestbook
I0315 18:35:08.396]               tier=frontend
I0315 18:35:08.396] Annotations:  <none>
I0315 18:35:08.396] Replicas:     3 current / 3 desired
I0315 18:35:08.396] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:08.396] Pod Template:
I0315 18:35:08.396]   Labels:  app=guestbook
I0315 18:35:08.396]            tier=frontend
I0315 18:35:08.396]   Containers:
I0315 18:35:08.396]    php-redis:
I0315 18:35:08.397]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0315 18:35:08.501] Namespace:    namespace-1552674906-28513
I0315 18:35:08.501] Selector:     app=guestbook,tier=frontend
I0315 18:35:08.501] Labels:       app=guestbook
I0315 18:35:08.501]               tier=frontend
I0315 18:35:08.502] Annotations:  <none>
I0315 18:35:08.502] Replicas:     3 current / 3 desired
I0315 18:35:08.502] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:08.502] Pod Template:
I0315 18:35:08.502]   Labels:  app=guestbook
I0315 18:35:08.502]            tier=frontend
I0315 18:35:08.502]   Containers:
I0315 18:35:08.502]    php-redis:
I0315 18:35:08.502]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I0315 18:35:09.483] replicationcontroller/frontend scaled
I0315 18:35:09.582] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0315 18:35:09.656] (Breplicationcontroller "frontend" deleted
W0315 18:35:09.757] I0315 18:35:08.682329   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"frontend", UID:"0e9b61df-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-zf4lg
W0315 18:35:09.758] I0315 18:35:08.820382   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:09.758] I0315 18:35:08.820609   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:09.758] error: Expected replicas to be 3, was 2
W0315 18:35:09.758] I0315 18:35:09.218504   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"frontend", UID:"0e9b61df-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hcf98
W0315 18:35:09.758] I0315 18:35:09.489710   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"frontend", UID:"0e9b61df-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-hcf98
W0315 18:35:09.821] I0315 18:35:09.820950   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:09.821] I0315 18:35:09.821112   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:09.827] I0315 18:35:09.826708   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"redis-master", UID:"1004b01e-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-btm7k
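The "Expected replicas to be 3, was 2" error a few lines up is the kubectl scale precondition check failing. A hedged sketch of that kind of call (the precondition value is inferred from the message):
  # scale only if the controller currently has exactly 3 replicas
  kubectl scale rc frontend --current-replicas=3 --replicas=2
  # if the live replica count differs, kubectl aborts with the error shown above instead of scaling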
I0315 18:35:09.928] replicationcontroller/redis-master created
... skipping 40 lines ...
I0315 18:35:11.573] service "expose-test-deployment" deleted
I0315 18:35:11.674] Successful
I0315 18:35:11.674] message:service/expose-test-deployment exposed
I0315 18:35:11.675] has:service/expose-test-deployment exposed
I0315 18:35:11.752] service "expose-test-deployment" deleted
I0315 18:35:11.841] Successful
I0315 18:35:11.841] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0315 18:35:11.841] See 'kubectl expose -h' for help and examples
I0315 18:35:11.841] has:invalid deployment: no selectors
I0315 18:35:11.924] Successful
I0315 18:35:11.924] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0315 18:35:11.925] See 'kubectl expose -h' for help and examples
I0315 18:35:11.925] has:invalid deployment: no selectors
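The two expose failures above come from trying to expose a deployment whose spec carries no label selector, so kubectl has nothing to copy into the new service. For contrast, a working expose followed by the failing case (names are placeholders except where the log supplies them):
  # works: the deployment's selector is copied into the new service
  kubectl expose deployment nginx-deployment --port=80 --name=expose-test-deployment
  # a deployment without .spec.selector cannot be exposed:
  # error: couldn't retrieve selectors via --selector flag or introspection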
W0315 18:35:12.025] I0315 18:35:11.821865   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:12.025] I0315 18:35:11.822040   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:12.090] I0315 18:35:12.089605   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment", UID:"115e0b8f-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1536", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0315 18:35:12.096] I0315 18:35:12.095262   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-64bb598779", UID:"115ed5a7-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1537", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-2fhwk
... skipping 22 lines ...
I0315 18:35:13.944] service "frontend" deleted
I0315 18:35:13.951] service "frontend-2" deleted
I0315 18:35:13.958] service "frontend-3" deleted
I0315 18:35:13.965] service "frontend-4" deleted
I0315 18:35:13.972] service "frontend-5" deleted
I0315 18:35:14.069] Successful
I0315 18:35:14.069] message:error: cannot expose a Node
I0315 18:35:14.069] has:cannot expose
I0315 18:35:14.158] Successful
I0315 18:35:14.159] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0315 18:35:14.159] has:metadata.name: Invalid value
I0315 18:35:14.250] Successful
I0315 18:35:14.250] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 41 lines ...
I0315 18:35:16.669] horizontalpodautoscaler.autoscaling "frontend" deleted
W0315 18:35:16.769] I0315 18:35:15.823752   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:16.770] I0315 18:35:15.823948   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:16.770] I0315 18:35:16.059826   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"frontend", UID:"13bbc0a3-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7jpjg
W0315 18:35:16.771] I0315 18:35:16.064516   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"frontend", UID:"13bbc0a3-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9k5jr
W0315 18:35:16.771] I0315 18:35:16.065185   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674906-28513", Name:"frontend", UID:"13bbc0a3-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r6wgg
W0315 18:35:16.771] Error: required flag(s) "max" not set
W0315 18:35:16.771] 
W0315 18:35:16.771] 
W0315 18:35:16.771] Examples:
W0315 18:35:16.771]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0315 18:35:16.772]   kubectl autoscale deployment foo --min=2 --max=10
W0315 18:35:16.772]   
... skipping 57 lines ...
I0315 18:35:16.986]           limits:
I0315 18:35:16.986]             cpu: 300m
I0315 18:35:16.986]           requests:
I0315 18:35:16.986]             cpu: 300m
I0315 18:35:16.986]       terminationGracePeriodSeconds: 0
I0315 18:35:16.987] status: {}
W0315 18:35:17.087] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0315 18:35:17.236] deployment.apps/nginx-deployment-resources created
W0315 18:35:17.337] I0315 18:35:17.242604   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources", UID:"14701b67-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1676", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0315 18:35:17.337] I0315 18:35:17.247720   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources-695c766d58", UID:"14710e82-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-4gmvr
W0315 18:35:17.338] I0315 18:35:17.252258   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources-695c766d58", UID:"14710e82-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-v7hkz
W0315 18:35:17.338] I0315 18:35:17.252596   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources-695c766d58", UID:"14710e82-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-9p5g6
I0315 18:35:17.438] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 4 lines ...
I0315 18:35:17.854] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0315 18:35:18.043] deployment.extensions/nginx-deployment-resources resource requirements updated
W0315 18:35:18.144] I0315 18:35:17.656450   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources", UID:"14701b67-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0315 18:35:18.145] I0315 18:35:17.661616   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"14b02181-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-65595
W0315 18:35:18.145] I0315 18:35:17.824581   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:18.145] I0315 18:35:17.825377   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:18.145] error: unable to find container named redis
W0315 18:35:18.146] I0315 18:35:18.069557   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources", UID:"14701b67-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5b7fc6dd8b to 0
W0315 18:35:18.146] I0315 18:35:18.076994   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"14b02181-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5b7fc6dd8b-65595
W0315 18:35:18.146] I0315 18:35:18.098765   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources", UID:"14701b67-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1703", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0315 18:35:18.147] I0315 18:35:18.101129   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674906-28513", Name:"nginx-deployment-resources-6bc4567bf6", UID:"14ec2fe0-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-wdqhm
I0315 18:35:18.247] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0315 18:35:18.263] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 224 lines ...
I0315 18:35:18.794]   observedGeneration: 4
I0315 18:35:18.794]   replicas: 4
I0315 18:35:18.794]   unavailableReplicas: 4
I0315 18:35:18.794]   updatedReplicas: 1
W0315 18:35:18.894] I0315 18:35:18.825616   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:18.895] I0315 18:35:18.825867   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:18.895] error: you must specify resources by --filename when --local is set.
W0315 18:35:18.896] Example resource specifications include:
W0315 18:35:18.896]    '-f rsrc.yaml'
W0315 18:35:18.896]    '--filename=rsrc.json'
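The "resource requirements updated" lines, the earlier "unable to find container named redis" warning, and the --local/--filename complaint in this stretch all correspond to kubectl set resources. A hedged reconstruction (container names and the local file are assumptions):
  # update limits/requests on a named container of the deployment
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
  # -c pointing at a container the pod template does not have fails: unable to find container named redis
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=1
  # --local needs an input manifest: you must specify resources by --filename when --local is set
  kubectl set resources -f deploy.yaml --limits=cpu=200m --local -o yaml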
I0315 18:35:18.996] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0315 18:35:19.046] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0315 18:35:19.139] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
I0315 18:35:20.653]                 pod-template-hash=7875bf5c8b
I0315 18:35:20.653] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0315 18:35:20.654]                 deployment.kubernetes.io/max-replicas: 2
I0315 18:35:20.654]                 deployment.kubernetes.io/revision: 1
I0315 18:35:20.654] Controlled By:  Deployment/test-nginx-apps
I0315 18:35:20.654] Replicas:       1 current / 1 desired
I0315 18:35:20.654] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:20.654] Pod Template:
I0315 18:35:20.654]   Labels:  app=test-nginx-apps
I0315 18:35:20.655]            pod-template-hash=7875bf5c8b
I0315 18:35:20.655]   Containers:
I0315 18:35:20.655]    nginx:
I0315 18:35:20.655]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 103 lines ...
W0315 18:35:25.009] I0315 18:35:24.829057   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:25.830] I0315 18:35:25.829162   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:25.830] I0315 18:35:25.829423   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:35:26.006] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 18:35:26.202] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 18:35:26.311] deployment.extensions/nginx rolled back
W0315 18:35:26.412] error: unable to find specified revision 1000000 in history
W0315 18:35:26.830] I0315 18:35:26.829668   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:26.830] I0315 18:35:26.829900   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:35:27.423] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 18:35:27.524] deployment.extensions/nginx paused
W0315 18:35:27.644] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0315 18:35:27.745] deployment.extensions/nginx resumed
W0315 18:35:27.846] I0315 18:35:27.830125   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:27.846] I0315 18:35:27.830337   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:35:27.947] deployment.extensions/nginx rolled back
I0315 18:35:28.066]     deployment.kubernetes.io/revision-history: 1,3
W0315 18:35:28.253] error: desired revision (3) is different from the running revision (5)
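The rollback/pause/resume dance above maps onto the deployment rollout subcommands; approximately (deployment name from the log, revision numbers from the error messages):
  kubectl rollout undo deployment/nginx --to-revision=1000000  # error: unable to find specified revision
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx                        # refused while paused; resume it first
  kubectl rollout resume deployment/nginx
  kubectl rollout undo deployment/nginx --to-revision=3
  kubectl rollout status deployment/nginx --revision=3         # likely source of "desired revision (3) is different from the running revision (5)"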
I0315 18:35:28.427] deployment.apps/nginx2 created
I0315 18:35:28.515] deployment.extensions "nginx2" deleted
I0315 18:35:28.602] deployment.extensions "nginx" deleted
I0315 18:35:28.696] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:35:28.858] (Bdeployment.apps/nginx-deployment created
W0315 18:35:28.959] I0315 18:35:28.434271   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674919-2533", Name:"nginx2", UID:"1b1bd1df-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1923", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
... skipping 20 lines ...
I0315 18:35:30.138] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0315 18:35:30.302] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 18:35:30.391] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0315 18:35:30.483] deployment.extensions/nginx-deployment image updated
W0315 18:35:30.583] I0315 18:35:29.289228   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment", UID:"1b5d65ef-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1970", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0315 18:35:30.584] I0315 18:35:29.294462   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment-5bfd55c857", UID:"1b9f3108-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1971", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-2jv64
W0315 18:35:30.584] error: unable to find container named "redis"
W0315 18:35:30.584] I0315 18:35:29.831038   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:30.584] I0315 18:35:29.831529   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:30.585] I0315 18:35:30.505103   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment", UID:"1b5d65ef-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-79b6f6d8f5 to 2
W0315 18:35:30.585] I0315 18:35:30.510423   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment-79b6f6d8f5", UID:"1b5e4979-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-79b6f6d8f5-gjjn8
W0315 18:35:30.585] I0315 18:35:30.523597   58959 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment", UID:"1b5d65ef-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1991", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6c69c955c7 to 1
W0315 18:35:30.585] I0315 18:35:30.533431   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment-6c69c955c7", UID:"1c564b37-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1998", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6c69c955c7-s8kmr
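The "image updated" line and the 'unable to find container named "redis"' warning above are kubectl set image at work; roughly (image tags are assumptions):
  # update the image of the container literally named nginx
  kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
  # naming a container the pod template does not have fails:
  kubectl set image deployment/nginx-deployment redis=redis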
... skipping 47 lines ...
I0315 18:35:32.974] deployment.extensions/nginx-deployment env updated
I0315 18:35:33.033] deployment.extensions/nginx-deployment env updated
I0315 18:35:33.118] deployment.extensions "nginx-deployment" deleted
I0315 18:35:33.209] configmap "test-set-env-config" deleted
I0315 18:35:33.287] secret "test-set-env-secret" deleted
I0315 18:35:33.309] +++ exit code: 0
W0315 18:35:33.410] E0315 18:35:33.185129   58959 replica_set.go:450] Sync "namespace-1552674919-2533/nginx-deployment-76c5fccf8b" failed with replicasets.apps "nginx-deployment-76c5fccf8b" not found
W0315 18:35:33.410] I0315 18:35:33.235688   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552674919-2533", Name:"nginx-deployment-79b6f6d8f5", UID:"1ccda92b-4751-11e9-ab52-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2107", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-79b6f6d8f5-2f5f4
W0315 18:35:33.411] E0315 18:35:33.336404   58959 replica_set.go:450] Sync "namespace-1552674919-2533/nginx-deployment-5b4bdf69f4" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5b4bdf69f4": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1552674919-2533/nginx-deployment-5b4bdf69f4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1db98f83-4751-11e9-ab52-0242ac110002, UID in object meta: 
W0315 18:35:33.486] E0315 18:35:33.485735   58959 replica_set.go:450] Sync "namespace-1552674919-2533/nginx-deployment-5cc58864fb" failed with replicasets.apps "nginx-deployment-5cc58864fb" not found
W0315 18:35:33.536] E0315 18:35:33.535435   58959 replica_set.go:450] Sync "namespace-1552674919-2533/nginx-deployment-79b6f6d8f5" failed with replicasets.apps "nginx-deployment-79b6f6d8f5" not found
I0315 18:35:33.636] Recording: run_rs_tests
I0315 18:35:33.636] Running command: run_rs_tests
I0315 18:35:33.637] 
I0315 18:35:33.637] +++ Running case: test-cmd.run_rs_tests 
I0315 18:35:33.637] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:35:33.637] +++ command: run_rs_tests
... skipping 33 lines ...
I0315 18:35:35.421] Namespace:    namespace-1552674933-8231
I0315 18:35:35.421] Selector:     app=guestbook,tier=frontend
I0315 18:35:35.421] Labels:       app=guestbook
I0315 18:35:35.421]               tier=frontend
I0315 18:35:35.421] Annotations:  <none>
I0315 18:35:35.421] Replicas:     3 current / 3 desired
I0315 18:35:35.421] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:35.422] Pod Template:
I0315 18:35:35.422]   Labels:  app=guestbook
I0315 18:35:35.422]            tier=frontend
I0315 18:35:35.422]   Containers:
I0315 18:35:35.422]    php-redis:
I0315 18:35:35.422]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0315 18:35:35.531] Namespace:    namespace-1552674933-8231
I0315 18:35:35.531] Selector:     app=guestbook,tier=frontend
I0315 18:35:35.531] Labels:       app=guestbook
I0315 18:35:35.531]               tier=frontend
I0315 18:35:35.532] Annotations:  <none>
I0315 18:35:35.532] Replicas:     3 current / 3 desired
I0315 18:35:35.532] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:35.532] Pod Template:
I0315 18:35:35.532]   Labels:  app=guestbook
I0315 18:35:35.532]            tier=frontend
I0315 18:35:35.532]   Containers:
I0315 18:35:35.532]    php-redis:
I0315 18:35:35.532]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0315 18:35:35.639] Namespace:    namespace-1552674933-8231
I0315 18:35:35.639] Selector:     app=guestbook,tier=frontend
I0315 18:35:35.639] Labels:       app=guestbook
I0315 18:35:35.639]               tier=frontend
I0315 18:35:35.639] Annotations:  <none>
I0315 18:35:35.640] Replicas:     3 current / 3 desired
I0315 18:35:35.640] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:35.640] Pod Template:
I0315 18:35:35.640]   Labels:  app=guestbook
I0315 18:35:35.640]            tier=frontend
I0315 18:35:35.640]   Containers:
I0315 18:35:35.640]    php-redis:
I0315 18:35:35.640]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 19 lines ...
I0315 18:35:35.936] Namespace:    namespace-1552674933-8231
I0315 18:35:35.936] Selector:     app=guestbook,tier=frontend
I0315 18:35:35.936] Labels:       app=guestbook
I0315 18:35:35.936]               tier=frontend
I0315 18:35:35.936] Annotations:  <none>
I0315 18:35:35.936] Replicas:     3 current / 3 desired
I0315 18:35:35.936] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:35.936] Pod Template:
I0315 18:35:35.936]   Labels:  app=guestbook
I0315 18:35:35.936]            tier=frontend
I0315 18:35:35.936]   Containers:
I0315 18:35:35.936]    php-redis:
I0315 18:35:35.937]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0315 18:35:35.938] Namespace:    namespace-1552674933-8231
I0315 18:35:35.938] Selector:     app=guestbook,tier=frontend
I0315 18:35:35.938] Labels:       app=guestbook
I0315 18:35:35.938]               tier=frontend
I0315 18:35:35.938] Annotations:  <none>
I0315 18:35:35.939] Replicas:     3 current / 3 desired
I0315 18:35:35.939] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:35.939] Pod Template:
I0315 18:35:35.939]   Labels:  app=guestbook
I0315 18:35:35.939]            tier=frontend
I0315 18:35:35.939]   Containers:
I0315 18:35:35.939]    php-redis:
I0315 18:35:35.939]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0315 18:35:36.004] Namespace:    namespace-1552674933-8231
I0315 18:35:36.004] Selector:     app=guestbook,tier=frontend
I0315 18:35:36.004] Labels:       app=guestbook
I0315 18:35:36.004]               tier=frontend
I0315 18:35:36.004] Annotations:  <none>
I0315 18:35:36.004] Replicas:     3 current / 3 desired
I0315 18:35:36.004] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:36.005] Pod Template:
I0315 18:35:36.005]   Labels:  app=guestbook
I0315 18:35:36.005]            tier=frontend
I0315 18:35:36.005]   Containers:
I0315 18:35:36.005]    php-redis:
I0315 18:35:36.005]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0315 18:35:36.109] Namespace:    namespace-1552674933-8231
I0315 18:35:36.109] Selector:     app=guestbook,tier=frontend
I0315 18:35:36.109] Labels:       app=guestbook
I0315 18:35:36.109]               tier=frontend
I0315 18:35:36.109] Annotations:  <none>
I0315 18:35:36.110] Replicas:     3 current / 3 desired
I0315 18:35:36.110] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:36.110] Pod Template:
I0315 18:35:36.110]   Labels:  app=guestbook
I0315 18:35:36.110]            tier=frontend
I0315 18:35:36.110]   Containers:
I0315 18:35:36.110]    php-redis:
I0315 18:35:36.110]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0315 18:35:36.224] Namespace:    namespace-1552674933-8231
I0315 18:35:36.224] Selector:     app=guestbook,tier=frontend
I0315 18:35:36.224] Labels:       app=guestbook
I0315 18:35:36.224]               tier=frontend
I0315 18:35:36.224] Annotations:  <none>
I0315 18:35:36.224] Replicas:     3 current / 3 desired
I0315 18:35:36.224] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:36.224] Pod Template:
I0315 18:35:36.224]   Labels:  app=guestbook
I0315 18:35:36.224]            tier=frontend
I0315 18:35:36.224]   Containers:
I0315 18:35:36.224]    php-redis:
I0315 18:35:36.225]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 194 lines ...
I0315 18:35:41.263] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0315 18:35:41.349] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0315 18:35:41.423] horizontalpodautoscaler.autoscaling "frontend" deleted
I0315 18:35:41.508] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0315 18:35:41.598] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0315 18:35:41.672] horizontalpodautoscaler.autoscaling "frontend" deleted
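The two HPAs above (min/max/target of 1 2 70, then 2 3 80) and the "required flag(s) \"max\" not set" usage dump that follows fit kubectl autoscale run against the frontend replica set; a hedged sketch:
  kubectl autoscale rs frontend --max=2 --cpu-percent=70        # no --min given; the resulting HPA shows minReplicas 1
  kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80
  kubectl autoscale rs frontend --min=2 --cpu-percent=70        # rejected, --max is required (see the usage text below)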
W0315 18:35:41.772] Error: required flag(s) "max" not set
W0315 18:35:41.773] 
W0315 18:35:41.773] 
W0315 18:35:41.773] Examples:
W0315 18:35:41.773]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0315 18:35:41.773]   kubectl autoscale deployment foo --min=2 --max=10
W0315 18:35:41.773]   
... skipping 93 lines ...
I0315 18:35:44.839] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 18:35:44.926] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0315 18:35:45.032] statefulset.apps/nginx rolled back
I0315 18:35:45.128] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0315 18:35:45.221] (Bapps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 18:35:45.326] Successful
I0315 18:35:45.326] message:error: unable to find specified revision 1000000 in history
I0315 18:35:45.327] has:unable to find specified revision
I0315 18:35:45.419] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0315 18:35:45.512] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 18:35:45.625] statefulset.apps/nginx rolled back
I0315 18:35:45.721] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0315 18:35:45.810] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 66 lines ...
I0315 18:35:47.716] Name:         mock
I0315 18:35:47.716] Namespace:    namespace-1552674946-27591
I0315 18:35:47.717] Selector:     app=mock
I0315 18:35:47.717] Labels:       app=mock
I0315 18:35:47.717] Annotations:  <none>
I0315 18:35:47.717] Replicas:     1 current / 1 desired
I0315 18:35:47.717] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:47.717] Pod Template:
I0315 18:35:47.717]   Labels:  app=mock
I0315 18:35:47.717]   Containers:
I0315 18:35:47.717]    mock-container:
I0315 18:35:47.718]     Image:        k8s.gcr.io/pause:2.0
I0315 18:35:47.718]     Port:         9949/TCP
... skipping 62 lines ...
I0315 18:35:49.944] Name:         mock
I0315 18:35:49.944] Namespace:    namespace-1552674946-27591
I0315 18:35:49.944] Selector:     app=mock
I0315 18:35:49.944] Labels:       app=mock
I0315 18:35:49.944] Annotations:  <none>
I0315 18:35:49.945] Replicas:     1 current / 1 desired
I0315 18:35:49.945] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:49.945] Pod Template:
I0315 18:35:49.945]   Labels:  app=mock
I0315 18:35:49.945]   Containers:
I0315 18:35:49.945]    mock-container:
I0315 18:35:49.945]     Image:        k8s.gcr.io/pause:2.0
I0315 18:35:49.945]     Port:         9949/TCP
... skipping 60 lines ...
I0315 18:35:52.146] Name:         mock
I0315 18:35:52.146] Namespace:    namespace-1552674946-27591
I0315 18:35:52.146] Selector:     app=mock
I0315 18:35:52.146] Labels:       app=mock
I0315 18:35:52.146] Annotations:  <none>
I0315 18:35:52.147] Replicas:     1 current / 1 desired
I0315 18:35:52.147] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:52.147] Pod Template:
I0315 18:35:52.147]   Labels:  app=mock
I0315 18:35:52.147]   Containers:
I0315 18:35:52.147]    mock-container:
I0315 18:35:52.147]     Image:        k8s.gcr.io/pause:2.0
I0315 18:35:52.147]     Port:         9949/TCP
... skipping 46 lines ...
I0315 18:35:54.242] Namespace:    namespace-1552674946-27591
I0315 18:35:54.242] Selector:     app=mock
I0315 18:35:54.242] Labels:       app=mock
I0315 18:35:54.242]               status=replaced
I0315 18:35:54.242] Annotations:  <none>
I0315 18:35:54.242] Replicas:     1 current / 1 desired
I0315 18:35:54.242] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:54.242] Pod Template:
I0315 18:35:54.242]   Labels:  app=mock
I0315 18:35:54.242]   Containers:
I0315 18:35:54.242]    mock-container:
I0315 18:35:54.242]     Image:        k8s.gcr.io/pause:2.0
I0315 18:35:54.243]     Port:         9949/TCP
... skipping 11 lines ...
I0315 18:35:54.244] Namespace:    namespace-1552674946-27591
I0315 18:35:54.244] Selector:     app=mock2
I0315 18:35:54.244] Labels:       app=mock2
I0315 18:35:54.244]               status=replaced
I0315 18:35:54.244] Annotations:  <none>
I0315 18:35:54.244] Replicas:     1 current / 1 desired
I0315 18:35:54.244] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 18:35:54.244] Pod Template:
I0315 18:35:54.244]   Labels:  app=mock2
I0315 18:35:54.244]   Containers:
I0315 18:35:54.244]    mock-container:
I0315 18:35:54.244]     Image:        k8s.gcr.io/pause:2.0
I0315 18:35:54.244]     Port:         9949/TCP
... skipping 122 lines ...
W0315 18:35:59.588] I0315 18:35:58.845646   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:35:59.688] persistentvolume/pv0002 created
I0315 18:35:59.752] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0315 18:35:59.824] persistentvolume "pv0002" deleted
W0315 18:35:59.925] I0315 18:35:59.845973   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:35:59.925] I0315 18:35:59.846170   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:35:59.987] E0315 18:35:59.986970   58959 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0315 18:36:00.088] persistentvolume/pv0003 created
I0315 18:36:00.088] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0315 18:36:00.155] persistentvolume "pv0003" deleted
I0315 18:36:00.251] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 18:36:00.267] +++ exit code: 0
I0315 18:36:00.329] Recording: run_persistent_volume_claims_tests
... skipping 493 lines ...
I0315 18:36:04.896] yes
I0315 18:36:04.896] has:the server doesn't have a resource type
I0315 18:36:04.972] Successful
I0315 18:36:05.010] message:yes
I0315 18:36:05.011] has:yes
I0315 18:36:05.043] Successful
I0315 18:36:05.044] message:error: --subresource can not be used with NonResourceURL
I0315 18:36:05.044] has:subresource can not be used with NonResourceURL
I0315 18:36:05.121] Successful
I0315 18:36:05.200] Successful
I0315 18:36:05.200] message:yes
I0315 18:36:05.200] 0
I0315 18:36:05.200] has:0
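The yes/no answers and the NonResourceURL complaint above are kubectl auth can-i checks; a hedged sketch of the failing combination alongside the valid forms:
  kubectl auth can-i get pods --subresource=log     # subresource check against a normal resource
  kubectl auth can-i get /logs                      # non-resource URL check
  kubectl auth can-i get /logs --subresource=log    # error: --subresource can not be used with NonResourceURL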
... skipping 6 lines ...
I0315 18:36:05.375] role.rbac.authorization.k8s.io/testing-R reconciled
I0315 18:36:05.465] legacy-script.sh:769: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0315 18:36:05.550] legacy-script.sh:770: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0315 18:36:05.639] legacy-script.sh:771: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0315 18:36:05.732] legacy-script.sh:772: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0315 18:36:05.807] Successful
I0315 18:36:05.808] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0315 18:36:05.808] has:only rbac.authorization.k8s.io/v1 is supported
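The "reconciled" lines and the v1beta1 rejection above correspond to kubectl auth reconcile, which only accepts rbac.authorization.k8s.io/v1 objects; roughly (the manifest name is a placeholder):
  # create or update the RBAC objects in the file so they match the manifest
  kubectl auth reconcile -f rbac-resource-plus.yaml
  # the same objects written as rbac.authorization.k8s.io/v1beta1 are refused:
  # error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole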
I0315 18:36:05.889] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0315 18:36:05.894] role.rbac.authorization.k8s.io "testing-R" deleted
I0315 18:36:05.904] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0315 18:36:05.912] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0315 18:36:05.923] Recording: run_retrieve_multiple_tests
... skipping 50 lines ...
I0315 18:36:07.120] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0315 18:36:07.122] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:36:07.125] +++ command: run_kubectl_explain_tests
I0315 18:36:07.133] +++ [0315 18:36:07] Testing kubectl(v1:explain)
W0315 18:36:07.234] I0315 18:36:06.970481   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674966-18045", Name:"cassandra", UID:"31d71ff8-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"2718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-lppqx
W0315 18:36:07.234] I0315 18:36:06.986163   58959 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552674966-18045", Name:"cassandra", UID:"31d71ff8-4751-11e9-ab52-0242ac110002", APIVersion:"v1", ResourceVersion:"2727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-s446c
W0315 18:36:07.234] E0315 18:36:06.990170   58959 replica_set.go:450] Sync "namespace-1552674966-18045/cassandra" failed with replicationcontrollers "cassandra" not found
I0315 18:36:07.335] KIND:     Pod
I0315 18:36:07.335] VERSION:  v1
I0315 18:36:07.335] 
I0315 18:36:07.335] DESCRIPTION:
I0315 18:36:07.335]      Pod is a collection of containers that can run on a host. This resource is
I0315 18:36:07.335]      created by clients and scheduled onto hosts.
... skipping 1161 lines ...
I0315 18:36:33.861] message:node/127.0.0.1 already uncordoned (dry run)
I0315 18:36:33.861] has:already uncordoned
I0315 18:36:33.958] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0315 18:36:34.046] node/127.0.0.1 labeled
I0315 18:36:34.146] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0315 18:36:34.221] Successful
I0315 18:36:34.221] message:error: cannot specify both a node name and a --selector option
I0315 18:36:34.221] See 'kubectl drain -h' for help and examples
I0315 18:36:34.221] has:cannot specify both a node name
I0315 18:36:34.295] Successful
I0315 18:36:34.295] message:error: USAGE: cordon NODE [flags]
I0315 18:36:34.295] See 'kubectl cordon -h' for help and examples
I0315 18:36:34.296] has:error\: USAGE\: cordon NODE
I0315 18:36:34.372] node/127.0.0.1 already uncordoned
I0315 18:36:34.455] Successful
I0315 18:36:34.455] message:error: You must provide one or more resources by argument or filename.
I0315 18:36:34.455] Example resource specifications include:
I0315 18:36:34.455]    '-f rsrc.yaml'
I0315 18:36:34.455]    '--filename=rsrc.json'
I0315 18:36:34.456]    '<resource> <name>'
I0315 18:36:34.456]    '<resource>'
I0315 18:36:34.456] has:must provide one or more resources
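The node-management block above exercises cordon/uncordon/drain and their argument validation. A hedged reconstruction of the invocations implied by the messages (the label selector value is an assumption):
  kubectl uncordon 127.0.0.1 --dry-run            # "already uncordoned (dry run)"
  kubectl drain 127.0.0.1 --selector=test=label   # error: cannot specify both a node name and a --selector option
  kubectl cordon                                  # error: USAGE: cordon NODE [flags]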
... skipping 15 lines ...
I0315 18:36:34.972] Successful
I0315 18:36:34.972] message:The following compatible plugins are available:
I0315 18:36:34.972] 
I0315 18:36:34.973] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0315 18:36:34.973]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0315 18:36:34.973] 
I0315 18:36:34.973] error: one plugin warning was found
I0315 18:36:34.973] has:kubectl-version overwrites existing command: "kubectl version"
I0315 18:36:35.055] Successful
I0315 18:36:35.055] message:The following compatible plugins are available:
I0315 18:36:35.055] 
I0315 18:36:35.055] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0315 18:36:35.055] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0315 18:36:35.055]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0315 18:36:35.056] 
I0315 18:36:35.056] error: one plugin warning was found
I0315 18:36:35.056] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
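Both plugin warnings follow from how kubectl discovers plugins: each PATH entry is scanned for executables named `kubectl-*`; a plugin whose name maps onto an existing built-in command (here `kubectl-version` → `kubectl version`) produces an "overwrites existing command" warning, and when two directories provide the same plugin name, only the first on PATH is used and the later one is reported as overshadowed. A simplified, first-match-wins sketch in Python (the built-in set and helper names are illustrative, not kubectl's code):

```python
import os

BUILTINS = {"version", "get", "apply"}  # tiny illustrative subset of built-in commands

def scan_plugins(path_dirs):
    """First directory on PATH wins; later same-named plugins are 'overshadowed'."""
    seen = {}       # plugin file name -> first path that provided it
    warnings = []
    for d in path_dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if not name.startswith("kubectl-"):
                continue
            full = os.path.join(d, name)
            cmd = name[len("kubectl-"):]
            if cmd in BUILTINS:
                warnings.append('%s overwrites existing command: "kubectl %s"' % (name, cmd))
            if name in seen:
                warnings.append("%s is overshadowed by a similarly named plugin: %s"
                                % (full, seen[name]))
            else:
                seen[name] = full
    return list(seen.values()), warnings
```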
I0315 18:36:35.133] Successful
I0315 18:36:35.134] message:The following compatible plugins are available:
I0315 18:36:35.134] 
I0315 18:36:35.134] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0315 18:36:35.134] has:plugins are available
I0315 18:36:35.213] Successful
I0315 18:36:35.214] message:
I0315 18:36:35.214] error: unable to find any kubectl plugins in your PATH
I0315 18:36:35.214] has:unable to find any kubectl plugins in your PATH
I0315 18:36:35.293] Successful
I0315 18:36:35.293] message:I am plugin foo
I0315 18:36:35.294] has:plugin foo
I0315 18:36:35.372] Successful
I0315 18:36:35.373] message:Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.1226+b0494b081d5c97", GitCommit:"b0494b081d5c97c21115cd2921f7c5b536470591", GitTreeState:"clean", BuildDate:"2019-03-15T18:29:56Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0315 18:36:35.475] 
I0315 18:36:35.477] +++ Running case: test-cmd.run_impersonation_tests 
I0315 18:36:35.479] +++ working dir: /go/src/k8s.io/kubernetes
I0315 18:36:35.482] +++ command: run_impersonation_tests
I0315 18:36:35.495] +++ [0315 18:36:35] Testing impersonation
I0315 18:36:35.571] Successful
I0315 18:36:35.572] message:error: requesting groups or user-extra for  without impersonating a user
I0315 18:36:35.572] has:without impersonating a user
W0315 18:36:35.672] I0315 18:36:33.863702   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:36:35.673] I0315 18:36:33.863916   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
W0315 18:36:35.673] I0315 18:36:34.864330   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
W0315 18:36:35.673] I0315 18:36:34.864547   55641 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0315 18:36:35.773] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 56 lines ...
W0315 18:36:39.491] I0315 18:36:39.488821   55641 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0315 18:36:39.493] W0315 18:36:39.488996   55641 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 19 lines ...
W0315 18:36:39.496] I0315 18:36:39.488177   55641 autoregister_controller.go:163] Shutting down autoregister controller
... skipping 16 lines ...
W0315 18:36:39.500] I0315 18:36:39.489676   55641 controller.go:176] Shutting down kubernetes service endpoint reconciler
W0315 18:36:39.500] I0315 18:36:39.489698   55641 secure_serving.go:160] Stopped listening on 127.0.0.1:8080
W0315 18:36:39.500] I0315 18:36:39.489812   55641 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0315 18:36:39.507] I0315 18:36:39.493044   55641 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
... skipping 45 lines ...
... skipping 29 lines ...
... skipping 13 lines ...
... skipping 2 lines ...
... skipping 38 lines ...
W0315 18:36:39.521] E0315 18:36:39.494932   55641 controller.go:179] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
... skipping 12 lines ...
... skipping 6 lines ...
... skipping 61 lines ...
I0315 18:49:44.232] ok  	k8s.io/kubernetes/test/integration/auth	106.560s
I0315 18:49:44.232] ok  	k8s.io/kubernetes/test/integration/client	51.120s
I0315 18:49:44.232] ok  	k8s.io/kubernetes/test/integration/configmap	5.772s
I0315 18:49:44.232] ok  	k8s.io/kubernetes/test/integration/cronjob	47.168s
I0315 18:49:44.232] ok  	k8s.io/kubernetes/test/integration/daemonset	534.297s
I0315 18:49:44.232] ok  	k8s.io/kubernetes/test/integration/defaulttolerationseconds	4.815s
I0315 18:49:44.232] FAIL	k8s.io/kubernetes/test/integration/deployment	210.432s
I0315 18:49:44.233] ok  	k8s.io/kubernetes/test/integration/dryrun	16.195s
I0315 18:49:44.233] ok  	k8s.io/kubernetes/test/integration/etcd	26.865s
I0315 18:49:44.233] ok  	k8s.io/kubernetes/test/integration/evictions	15.804s
I0315 18:49:44.233] ok  	k8s.io/kubernetes/test/integration/examples	20.231s
I0315 18:49:44.233] dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foocfzjma", Name:"ownerstgc2", UID:"2ea5dcee-4752-11e9-88d3-0242ac110002", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}
I0315 18:49:44.233] waitingForDependentsDeletion: []v1.OwnerReference{v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8qd8ta", Name:"ownermjhn6", UID:"35da9291-4752-11e9-88d3-0242ac110002", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}
... skipping 18 lines ...
I0315 18:49:44.237] ok  	k8s.io/kubernetes/test/integration/storageclasses	4.917s
I0315 18:49:44.237] ok  	k8s.io/kubernetes/test/integration/tls	7.979s
I0315 18:49:44.237] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.967s
I0315 18:49:44.237] ok  	k8s.io/kubernetes/test/integration/volume	94.660s
I0315 18:49:44.237] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	148.269s
I0315 18:49:59.823] +++ [0315 18:49:59] Saved JUnit XML test report to /workspace/artifacts/junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190315-183649.xml
I0315 18:49:59.826] Makefile:184: recipe for target 'test' failed
I0315 18:49:59.836] +++ [0315 18:49:59] Cleaning up etcd
W0315 18:49:59.937] make[1]: *** [test] Error 1
W0315 18:49:59.937] !!! [0315 18:49:59] Call tree:
W0315 18:49:59.937] !!! [0315 18:49:59]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0315 18:50:00.148] +++ [0315 18:50:00] Integration test cleanup complete
I0315 18:50:00.149] Makefile:203: recipe for target 'test-integration' failed
W0315 18:50:00.249] make: *** [test-integration] Error 1
W0315 18:50:03.224] Traceback (most recent call last):
W0315 18:50:03.225]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0315 18:50:03.225]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0315 18:50:03.225]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0315 18:50:03.225]     check(*cmd)
W0315 18:50:03.225]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0315 18:50:03.225]     subprocess.check_call(cmd)
W0315 18:50:03.225]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 18:50:03.291]     raise CalledProcessError(retcode, cmd)
W0315 18:50:03.292] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
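The traceback shows the failure path: `kubernetes_verify.py` wraps each step in `check()`, which calls `subprocess.check_call`, so when the dockerized `test-dockerized.sh` run exits non-zero, the `CalledProcessError` propagates up and the whole job is marked FAIL. A minimal sketch of that wrapper (simplified from what the traceback implies; the real script's logging and argument handling are omitted):

```python
import subprocess

def check(*cmd):
    """Run a command; a non-zero exit raises subprocess.CalledProcessError."""
    print("Run:", " ".join(cmd))
    subprocess.check_call(cmd)

# The docker invocation's exit status 2 surfaces exactly like this:
try:
    check("bash", "-c", "exit 2")
except subprocess.CalledProcessError as err:
    print("Command failed with exit status", err.returncode)
```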
E0315 18:50:03.298] Command failed
I0315 18:50:03.298] process 531 exited with code 1 after 26.6m
E0315 18:50:03.298] FAIL: ci-kubernetes-integration-master
I0315 18:50:03.298] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 18:50:03.841] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 18:50:03.905] process 127724 exited with code 0 after 0.0m
I0315 18:50:03.906] Call:  gcloud config get-value account
I0315 18:50:04.201] process 127736 exited with code 0 after 0.0m
I0315 18:50:04.202] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 18:50:04.202] Upload result and artifacts...
I0315 18:50:04.202] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/9502
I0315 18:50:04.202] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9502/artifacts
W0315 18:50:05.420] CommandException: One or more URLs matched no objects.
E0315 18:50:05.566] Command failed
I0315 18:50:05.567] process 127748 exited with code 1 after 0.0m
W0315 18:50:05.567] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9502/artifacts not exist yet
I0315 18:50:05.567] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/9502/artifacts
I0315 18:50:10.695] process 127890 exited with code 0 after 0.1m
W0315 18:50:10.696] metadata path /workspace/_artifacts/metadata.json does not exist
W0315 18:50:10.696] metadata not found or invalid, init with empty metadata
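The tail of the log is the uploader tolerating missing state: `gsutil ls` on the artifacts prefix fails because nothing has been uploaded yet, the artifacts are then copied up with `gsutil cp`, and a missing `metadata.json` simply falls back to empty metadata. A hedged Python sketch of that probe-then-upload, default-to-empty flow (the helpers below shell out to gsutil and are illustrative, not the actual bootstrap code):

```python
import json
import subprocess

def upload_artifacts(local_dir, gcs_prefix):
    # Probe the destination; a missing prefix is expected on the first upload.
    if subprocess.call(["gsutil", "ls", gcs_prefix]) != 0:
        print("Remote dir %s not exist yet" % gcs_prefix)
    subprocess.check_call(["gsutil", "-m", "-q", "cp", "-r", local_dir, gcs_prefix])

def load_metadata(path):
    # Missing or invalid metadata.json falls back to empty metadata.
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, ValueError):
        print("metadata not found or invalid, init with empty metadata")
        return {}
```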
... skipping 15 lines ...